Database locked while trying to access from PHP script - php

I am writing an Android app that communicates with a PHP backend. The backend DB is SQLite 3. The problem is that I am intermittently getting this error: PHP Warning: SQLite3::prepare(): Unable to prepare statement: 5, database is locked. I open a connection to the database in each PHP file and close it when the script finishes. I think the problem is that one script locks the database file while writing to it, and a second script fails when it tries to access the file at the same time. One way of avoiding this would be to share a single connection between all of the PHP scripts. I was wondering if there is any other way of avoiding this?
Edit:
This is the first file:
<?php
$first = SQLite3::escapeString($_GET['first']);
$last = SQLite3::escapeString($_GET['last']);
$user = SQLite3::escapeString($_GET['user']);
$db = new SQLite3("database.db");
$insert = $db->prepare('INSERT INTO users VALUES (NULL, :user, :first, :last, 0, datetime())');
$insert->bindParam(':user', $user, SQLITE3_TEXT);
$insert->bindParam(':first', $first, SQLITE3_TEXT);
$insert->bindParam(':last', $last, SQLITE3_TEXT);
$insert->execute();
?>
Here is the second file:
<?php
$user = SQLite3::escapeString($_GET['user']);
$db = new SQLite3("database.db");
$checkquery = $db->prepare('SELECT allowed FROM users WHERE username=:user');
$checkquery->bindParam(':user', $user, SQLITE3_TEXT);
$results = $checkquery->execute();
$row = $results->fetchArray(SQLITE3_ASSOC);
print(json_encode($row['allowed']));
?>

First, when you are done with a resource you should always close it. In theory it will be closed when it is garbage-collected, but you can't depend on PHP doing that right away. I've seen a few databases (and other kinds of libraries for that matter) get locked up because I didn't explicitly release resources.
$db->close();
unset($db);
Second, SQLite3 gives you a busy timeout. I'm not sure what the default is, but if you're willing to wait a few seconds for the lock to clear when you execute queries, you can say so. The timeout is in milliseconds.
$db->busyTimeout(5000);
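Putting both suggestions together with the insert script from the question would look something like this (the 5-second timeout is an arbitrary choice):
<?php
$first = SQLite3::escapeString($_GET['first']);
$last = SQLite3::escapeString($_GET['last']);
$user = SQLite3::escapeString($_GET['user']);
$db = new SQLite3("database.db");
// Wait up to 5 seconds for a competing lock to clear instead of failing immediately.
$db->busyTimeout(5000);
$insert = $db->prepare('INSERT INTO users VALUES (NULL, :user, :first, :last, 0, datetime())');
$insert->bindParam(':user', $user, SQLITE3_TEXT);
$insert->bindParam(':first', $first, SQLITE3_TEXT);
$insert->bindParam(':last', $last, SQLITE3_TEXT);
$insert->execute();
// Release the lock as soon as we are done rather than waiting for garbage collection.
$db->close();
unset($db);
?>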

I was getting "database locked" all the time until I found out some features of sqlite3 must be set by using SQL special instructions (i.e. using PRAGMA keyword). For instance, what apparently solved my problem with "database locked" was to set journal_mode to 'wal' (it is defaulting to 'delete', as stated here: https://www.sqlite.org/wal.html (see Activating And Configuring WAL Mode)).
So basically what I had to do was creating a connection to the database and setting journal_mode with the SQL statement. Example:
<?php
$db = new SQLite3('/my/sqlite/file.sqlite3');
$db->busyTimeout(5000);
// WAL mode has better control over concurrency.
// Source: https://www.sqlite.org/wal.html
$db->exec('PRAGMA journal_mode = wal;');
?>
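One detail worth noting from that same SQLite page: journal_mode = wal is persistent, so it sticks to the database file and only needs to be set once, not on every connection. You can confirm the active mode like this:
$mode = $db->querySingle('PRAGMA journal_mode;');
print($mode); // prints "wal" once WAL mode is active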
Hope that helps.

Related

How can I bring data from a MariaDB hosted on a server using a PHP file?

We have a remote server containing a MariaDB SQL database. I have to write a piece of code, to be placed on that same server, whose mission is to execute queries asking for data, modify that data, and send it to an external API hosted on another server. When I was shown the DB, it was through SSH commands, entering SQL mode inside the server, rather than through code like PHP, which is how I have always done it before.
So, my code is to be placed on the same server as the DB; it brings in the data, modifies some info, and calls the API to upload it.
As I said, I am completely lost, so my question is simple: can this be achieved? If so, how?
I've read about ssh_connect and exec, but since the code will be placed on the same server I don't think this is necessary; correct me if I am wrong. I can't post any code since I don't know how to start.
Thank you guys for all the help, I am closing the question now:
All I had to do was use PDO as a secure way to establish a connection and to prepare and execute the queries. Remember that I placed my PHP file on the same server that hosts the DB, and note that I had to create a user and grant it permissions on the DB (you can find how in one of the comments above or here). Here is the code:
try {
    $conn = new PDO('mysql:host=yourhostserver;dbname=dbname', 'user', 'password');
    $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $conn->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
} catch (PDOException $e) {
    echo "ERROR: " . $e->getMessage();
}
//Example of query
$stmt = $conn->prepare('SELECT GROUP_CONCAT(DISTINCT source_external_subscriber_id) AS ids FROM cdr');
$stmt->execute();
foreach ($stmt as $row) {
    $string = $row['ids'];
}
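For the "send it to an external API" part of the mission, the usual tool from a server-side PHP script is curl. A rough sketch only: the endpoint URL and the payload shape below are placeholders, not from my actual code:
// POST the ids we just collected to the external API (hypothetical endpoint).
$ch = curl_init('https://api.example.com/upload'); // placeholder URL
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode(array('ids' => $string)));
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);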

Best practice for doing many mysqli connections

I received a max_user_connections error these days, and I was wondering if I am doing something wrong.
I have a config.php file with a mysqli connection script:
$mysqli = new mysqli('localhost', 'my_user', 'my_password', 'my_db');
On pages where I need to get something via mysqli, I include config.php. Here is an example:
example.php
<?php
include_once("config.php");
$stmt = $mysqli->prepare("select...");
$stmt->execute();
$stmt->bind_result(...,...);
while($stmt->fetch()) {
...
}
$stmt->close();
?>
some html <p> <img>...
<?php
$stmt = $mysqli->prepare("select...");
$stmt->execute();
$stmt->bind_result(...,...);
while($stmt->fetch()) {
...
}
$stmt->close();
?>
some html <p> <img>...
<?php
$stmt = $mysqli->prepare("select...");
$stmt->execute();
$stmt->bind_result(...,...);
while($stmt->fetch()) {
...
}
$stmt->close();
?>
So, my question is: is this the best practice for doing selects? Should I close the mysqli connection after each select and open it again? Or should I do all the selects together at the top, instead of separating them with HTML in the middle?
the best practise to do selects like this?
I hate it when people use the term "best practice"; it's usually a good indicator that they don't know what they are talking about.
The advantage of your approach is that it's nice and simple. But as you've discovered, it does not scale well.
In practice it's quite hard to measure, but in most applications the MySQL connection sits unused for most of the lifecycle of the script. Hence there are usually big wins to be made by delaying the opening of the connection and closing it as soon as possible.
The former can be done by decorating the mysqli class: the connect method just stores the credentials, while anything which needs to talk to the database calls the wrapped connect method when it actually needs access (see the sketch below). However, the yield of this approach is typically low unless your code creates a lot of database connections which are never used (in which case a cheaper solution would be to increase the connection limit).
It can take a long time after the last query runs before the connection is closed. Explicitly closing the connection addresses this, but requires a lot of code changes.
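A minimal sketch of that decorator idea (the class and method names are mine, not from any existing library):
// Stores credentials up front but only opens the real connection
// the first time a query is actually issued.
class LazyMysqli
{
    private $host, $user, $pass, $db;
    private $conn = null;

    public function __construct($host, $user, $pass, $db)
    {
        // Just store the credentials; no connection is opened yet.
        $this->host = $host;
        $this->user = $user;
        $this->pass = $pass;
        $this->db = $db;
    }

    private function connect()
    {
        // Open the real connection only on first use.
        if ($this->conn === null) {
            $this->conn = new mysqli($this->host, $this->user, $this->pass, $this->db);
        }
        return $this->conn;
    }

    public function prepare($sql)
    {
        return $this->connect()->prepare($sql);
    }

    public function close()
    {
        // Close as early as possible to give the connection back to the server.
        if ($this->conn !== null) {
            $this->conn->close();
            $this->conn = null;
        }
    }
}
config.php would then create $mysqli = new LazyMysqli('localhost', 'my_user', 'my_password', 'my_db') instead, and a page that renders only HTML never opens a server connection at all.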
Do not open and close a connection for each query. Although it will reduce the number of simultaneous connections to the database, the performance will suck.
The biggest win usually comes from optimizing your queries - reduced concurrency and a better user experience.

Php adodb CacheExecute com_exception arguments wrong type

I'm trying to get ADODB caching to work. I have a PHP script where I define the DB connection.
global $conn;
$conn = new COM ("ADODB.Connection");
$connStr = "PROVIDER-SQLOLEDB;SERVER=;UID=;PWD=;DATABASE=);
$conn->open($connStr);
I left the unnecessary details out of the picture.
Then in some other script I include connection.php and try to make a normal query.
$query = "SELECT * from table where some_id = 21540 and other_id = BOGUS_INFO";
$rs = $GLOBALS['conn']->CacheExecute(60,$query);
This returns Uncaught exception 'com_exception'.. ADODB.Connection: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another.
I'm baffled because the next line of code works flawlessly.
$rs = $GLOBALS['conn']->execute($query); //OK!
Any ideas?
I also tried CacheGetOne but I get the same error.
Could it be from the way I defined this thing below? (It's literally like that in my code.)
$GLOBALS['ADODB_CACHE_DIR']=$_SERVER['DOCUMENT_ROOT'].'/../cache/adodb';
Well, after a lot of hassle, I kind of found an answer by choosing another way of doing things. I downloaded the latest ADODB build, inserted it into my project, and modified files accordingly:
The connection.php changed to:
require('PATH/adodb.inc.php');
require('PATH/adodb-csvlib.inc.php'); // read somewhere that this is needed for the caching executes
$GLOBALS['ADODB_CACHE_DIR'] = $_SERVER['DOCUMENT_ROOT'].'/cache/adodb';
// Not sure which variable ADODB really uses to find the cache directory, so set both to be safe.
global $ADODB_CACHE_DIR;
$ADODB_CACHE_DIR = $_SERVER['DOCUMENT_ROOT'].'/cache/adodb';
// Plain 'mssql' made the script die silently on the execute() attempt (no error, no output), so 'mssqlnative' it is.
$conn = NewADOConnection('mssqlnative');
$conn->Connect($myServer, $myUser, $myPass, $myDb);
After that I had to fiddle a bit with the code, because
$rs = $conn->CacheExecute($seconds, $query)
returns an ADORecordSet_array_mssqlnative object, not an array, and in my code I used to display and manipulate values as
while (!$rs->EOF) {
    $rs['row']->value;
    $rs->MoveNext();
}
and now they should be
$rs->fields['row'];
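So the loop rewritten for the new recordset API looks like this ('row' standing in for whatever column the query returns):
while (!$rs->EOF) {
    echo $rs->fields['row'];
    $rs->MoveNext();
}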
Another tricky thing was getting the fields array to be keyed by the names of the columns in my query, but after a short search I discovered
$GLOBALS['conn']->SetFetchMode(ADODB_FETCH_ASSOC);
and voilà! Everything works, even the caching.
This bare optimisation took script execution times from 1 second down to 0.1, and sometimes even 0.005.

How can I get php pdo code to keep retrying to connect if there are too many open connections?

I have an issue that has only cropped up now. I am on a shared web hosting plan with a maximum of 10 concurrent database connections. The web app has dozens of queries, some PDO, some mysql_*.
Loading one page in particular peaks at 5-6 concurrent connections, meaning it takes a minimum of 2 users loading it at the same time to spit an error at one or both of them.
I know this is inefficient, and I'm sure I can cut that down quite a bit, but my idea at the moment is to move the PDO code into a function, pass in a query string and an array of variables, and have it return an array (partly to tidy my code).
THE ACTUAL QUESTION:
How can I get this function to keep retrying until it manages to execute, holding up the script that called it (and any script that might have called that one) until it manages to execute and return its data? I don't want things executing out of order; I am happy for code to be delayed for a second or so during peak times.
Since someone will ask for code, here's what I do at the moment. I have this in a file on its own so I have a central place to change connection parameters. The if statement merely removes the need to continuously change the parameters when I switch between my test server and the live server.
$dbtype = "mysql";
$server_addr = $_SERVER['SERVER_ADDR'];
if ($server_addr == '192.168.1.10') {
    $dbhost = "localhost";
} else {
    $dbhost = "xxxxx.xxxxx.xxxxx.co.nz";
}
$dbname = "mydatabase";
$dbuser = "user";
$dbpass = "supersecretpassword";
I 'include' that file at the top of a function
include 'db_connection_params.php';
$pdo_conn = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);
then run commands like this, all on the one connection:
$sql = "select * from tbl_sub_cargo_cap where sub_model_sk = ?";
$capq = $pdo_conn->prepare($sql);
$capq->execute(array($sk_to_load));
while ($caprow = $capq->fetch(PDO::FETCH_ASSOC)) {
    //stuff
}
You shouldn't need 5-6 concurrent connections for a single page; each page should really only ever use 1 connection. I'd try to re-architect whatever part of your application is causing multiple connections on a single page.
However, you should be able to catch a PDOException when the connection fails (documentation on connection management), and then retry some number of times.
A quick example,
<?php
$retries = 3;
while ($retries > 0)
{
    try
    {
        $dbh = new PDO("mysql:host=localhost;dbname=blahblah", $user, $pass);
        // Do query, etc.
        $retries = 0;
    }
    catch (PDOException $e)
    {
        // Should probably check $e is a connection error, could be a query error!
        echo "Something went wrong, retrying...";
        $retries--;
        usleep(500000); // Wait 0.5s between retries (usleep takes microseconds).
    }
}
10 concurrent connections is A LOT; that can serve 10-15 online users easily.
It would take heavy effort to exhaust them.
So there is something wrong with your code.
There are 2 main reasons for it:
slow queries take too much time, and thus serving one hit keeps one MySQL connection busy for too long;
multiple connections are opened from every script.
The former has to be investigated, but the latter is simple:
Do not mix mysql_* and PDO in one script: you are opening 2 connections at a time.
When using PDO, open the connection only once and then use it throughout your code.
Reducing the number of connections in one script is the only way to go.
If you have multiple instances of the PDO class in your code, you would have to add the timeout-handling code you want to every call, so heavy code rewriting is required anyway.
Replace these new instances with global $pdo; instead. It will take the same amount of time, but it will be a permanent solution, not a temporary patch like the one you're asking for.
Please be sensible.
PHP automatically closes all the connections at the end of the script; you don't have to care about closing them manually.
Having only one connection throughout one script is common practice, used by developers all around the world. You can use it without any doubts. Just use it.
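For instance, the "open once, use throughout" advice can be packaged in a small helper so every call site shares the same handle. A sketch reusing the question's own db_connection_params.php; the function name is made up:
// One shared PDO handle per script, opened lazily on first use.
function db()
{
    static $pdo = null;
    if ($pdo === null) {
        include 'db_connection_params.php'; // defines $dbhost, $dbname, $dbuser, $dbpass
        $pdo = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    }
    return $pdo;
}

// Every query in the script then reuses the same connection:
$capq = db()->prepare('select * from tbl_sub_cargo_cap where sub_model_sk = ?');
$capq->execute(array($sk_to_load));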
If you have a transaction open and want to log something in the database, you sometimes need 2 connections in one script.

Best approach to see if a MySQL Server is up and running

I have a Master - Slave setup for a web application written in PHP. I have a pool of slaves I use for reading, and a Master that is used for writes (and for reads if a write has been sent during this request). I would like to incorporate an automated system for removing crashed servers from the read pool. Currently I am using:
foreach ($readers as $reader)
{
    $fp = @fsockopen($reader['host'], 3306, $errno, $errstr, 1);
    if (!$fp)
    {
        //Remove from pool
    }
    unset($fp);
}
My primary question: is there a more reliable method? I have had quite a few false positives (and vice versa) because this does not actually check for a MySQL server, only for something listening on port 3306. Is there a way to check for a MySQL server without raising an exception, which is the behaviour of the PDO and MySQLi extensions in PHP?
You could just use mysql_connect() and check the result for false, and close the connection right away on success. You can make a dummy account with no privileges for that if you like.
That's really the only reliable way, especially if you want to distinguish a running MySQL server from any other random process listening on port 3306.
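For reference, the same check written against the mysqli extension instead of the legacy mysql_* functions. The dummy account is hypothetical, and mysqli_report() is switched off so a failed connect returns false rather than throwing:
mysqli_report(MYSQLI_REPORT_OFF); // check return values instead of catching exceptions
foreach ($readers as $key => $reader)
{
    // 'pool_check' is a made-up unprivileged account created just for this test.
    $link = @mysqli_connect($reader['host'], 'pool_check', 'secret', '', 3306);
    if ($link === false)
    {
        unset($readers[$key]); // nothing speaking the MySQL protocol answered
    }
    else
    {
        mysqli_close($link);
    }
}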
You could use mysql_ping() to check whether a DB connection you currently have open is still alive.
Here is the example posted at http://www.php.net/manual/en/function.mysql-ping.php:
<?php
set_time_limit(0);
$conn = mysql_connect('localhost', 'mysqluser', 'mypass');
$db = mysql_select_db('mydb');
/* Assuming this query will take a long time */
$result = mysql_query($sql);
if (!$result) {
    echo 'Query #1 failed, exiting.';
    exit;
}
/* Make sure the connection is still alive; if not, try to reconnect */
if (!mysql_ping($conn)) {
    echo 'Lost connection, exiting after query #1';
    exit;
}
mysql_free_result($result);
/* So the connection is still alive, let's run another query */
$result2 = mysql_query($sql2);
?>
The best way to check whether any service is alive is to actually use it. So for MySQL, try to connect and execute some fast query; for a web server, try to fetch some file; for PHP, try to fetch some simple script...
For a MySQL master/slave setup, one solution is to actually check the state of replication. You can check how many transactions the slave is behind the master and decide to stop using that slave when/while it has old data. (I don't do replication myself, but I think you need to compare the variables Read_Master_Log_Pos and Relay_Log_Pos.)
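A rough sketch of that lag check. Hedged assumptions: $link is an open connection to the slave, Seconds_Behind_Master is used as the usual shortcut for measuring lag, and the 30-second threshold is arbitrary:
$res = mysqli_query($link, 'SHOW SLAVE STATUS');
$status = $res ? mysqli_fetch_assoc($res) : null;
// Seconds_Behind_Master is NULL whenever replication is not running at all.
if (!$status || $status['Seconds_Behind_Master'] === null || $status['Seconds_Behind_Master'] > 30)
{
    // Drop this slave from the read pool until it catches up.
}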
