curl within transaction - php

I am using the Google client API to fetch the status of my instances in order to keep a local database copy.
It is possible that multiple scripts update my local copy. I could fetch the data, and while the data is travelling back to my server, some other script could modify it. Once I store the data from the original fetch, a lost update is created.
Therefore I need to use transactions to block all other traffic to my table while I am making an update.
This is the code for fetching:
<?php
require_once './gcloud/vendor/autoload.php';
$client = new Google_Client();
$client->setApplicationName('Google-ComputeSample/0.1');
$client->useApplicationDefaultCredentials();
$client->addScope('https://www.googleapis.com/auth/cloud-platform');
$project = 'project_id'; // TODO: Update placeholder value.
$zone = 'us-east1-b'; // TODO: Update placeholder value.
$instance = 'instance-1'; // TODO: Update placeholder value.
$service = new Google_Service_Compute($client);
$mysqli = new mysqli($hn, $un, $pw, $db); // $hn, $un, $pw, $db hold my connection parameters
$mysqli->begin_transaction();
$listInstancesJSON = $service->instances->listInstances($project, $zone, []);
//store it
$mysqli->commit();
Blocking the table while making a request sounds like a terrible idea. I think I'll add ini_set('max_execution_time', 5); at the start of the script, just in case the fetch fails (I presume they use curl). If execution time exceeds 5s, would my table (or database) remain blocked even after the script terminates? Is there any other defence mechanism I should implement?
I plan to run this code as a cron job every minute.

It sounds like listInstances needs the equivalent of
SELECT ... FOR UPDATE
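A hedged sketch of what that could look like on the local-copy side with mysqli (the instance_copy table and its columns are hypothetical, and this only serializes the writes, not the API fetches): do the API call before opening the transaction so that no lock is held during the HTTP round-trip, then lock just the row you are about to overwrite:
<?php
// Sketch only: fetch from the API *before* starting the transaction,
// so no lock is held while the HTTP request is in flight.
$listInstancesJSON = $service->instances->listInstances($project, $zone, []);
$status = 'RUNNING'; // placeholder: derive this from $listInstancesJSON

$mysqli = new mysqli($hn, $un, $pw, $db);
$mysqli->begin_transaction();

// Lock only the row for this instance (hypothetical table/columns).
$stmt = $mysqli->prepare('SELECT status FROM instance_copy WHERE name = ? FOR UPDATE');
$stmt->bind_param('s', $instance);
$stmt->execute();
$stmt->get_result(); // other writers now wait here until we commit

$stmt = $mysqli->prepare('UPDATE instance_copy SET status = ?, fetched_at = NOW() WHERE name = ?');
$stmt->bind_param('ss', $status, $instance);
$stmt->execute();

$mysqli->commit(); // releases the row lock
As for the worry about a blocked table after termination: if the script is killed (for example by max_execution_time), PHP closes the connection during shutdown and InnoDB rolls back the open transaction, so the lock should not outlive the script.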

Related

Laravel jobs/queue unclosed SQL Server database sessions

I noticed a large amount of sessions running on the database. Almost half of them have a query attached. In my project I use a queue worker to execute code in the background, and I use the database as the queue connection.
Here is the code I use:
Passing jobs to Batch:
$jobs = [];
foreach($data as $d){
$jobs[] = new EstimateImportJob($d);
}
$batch = Bus::batch($jobs)->dispatch();
Job source code:
$current_date = Carbon::now();
// Using these because the tables don't have auto-increment columns
$last_po_id = \DB::connection("main")->table('PO')->latest('ID')->first()->ID;
$last_poline_id = \DB::connection("main")->table('POLINE')->latest('ID')->first()->ID;
$last_poline_poline = \DB::connection("main")->table('POLINE')->latest('POLINE')->first()->POLINE;
\DB::connection('main')->table('POLINE')->insert($d);
As far as I know, Laravel is supposed to close the DB connection after code execution is finished, but I can't find a reason why I have so many database sessions. Any ideas would be appreciated!
Normally, even with a working queue worker, the expected result is to have 3-4 database sessions.
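One thing worth checking, as a hedged sketch rather than a confirmed fix (it assumes the job's handle() method and that $this->d holds the row data passed to the constructor), is whether explicitly disconnecting the 'main' connection at the end of each job reduces the number of lingering sessions, since a long-running worker process otherwise keeps its connections open between jobs:
public function handle()
{
    $current_date = Carbon::now();

    $last_po_id = \DB::connection('main')->table('PO')->latest('ID')->first()->ID;
    // ... build the rows and insert them as before ...
    \DB::connection('main')->table('POLINE')->insert($this->d); // $this->d assumed from the constructor

    // Close this connection so the worker does not hold the session open;
    // Laravel will reopen it on demand for the next job.
    \DB::disconnect('main');
}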

Rabbitmq PHP consume every second

What is the best practice to receive data from a queue every second via PHP? I do this with an AJAX query that calls the PHP script every second. There, a connection object is created and a queue is declared every time. I tried to save these in a session variable after the first call, but when I call the PHP script a second time, I can't receive any more data. When I debug the channel object, I see that is_open is false:
protected 'is_open' => boolean false
Here is my basic php test code:
<?php
require_once __DIR__ . '/vendor/autoload.php';
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;
session_start(); # start session handling.
$id = $_GET["uid"];
$connected = $_GET["connected"];
if (empty($id)) {
$id = 0;
}
$queue = 'CyOS EV Queue ' . $id;
$reset = $_GET["reset"];
if ($reset === "true") {
session_destroy();
$_SESSION = array();
echo "session destroyed";
var_dump($_SESSION);
exit;
}
$connection;
$channel;
if (!isset($_SESSION['coneccted'])) {
$_SESSION['coneccted'] = true;
$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();
$channel->queue_declare($queue, false, false, false, false, false);
$channel->queue_bind($queue, 'CyOS-EX');
$_SESSION['connection'] = $connection;
$_SESSION['channel'] = $channel;
} else {
echo "already connected \n\r";
$connection = $_SESSION['connection'];
$channel = $_SESSION['channel'];
var_dump($_SESSION);
}
$test = new AMQPMessage();
$i = 0;
while ($i < 10) {
echo "try to get data from " . $queue . "\n\r";
$test = $channel->basic_get($queue, true);
$i++;
if (isset($test)) {
echo "received data";
break;
}
}
echo $test->body;
When I initialize the connection and the channel every time I call the script, it works.
I presume the lines you are concerned about are these ones:
$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();
$channel->queue_declare($queue, false, false, false, false, false);
$channel->queue_bind($queue, 'CyOS-EX');
Let's look at what's happening here:
Connect to the RabbitMQ server. This is like connecting to a database, or memcache, or any other external process, and needs to happen in each PHP request. You can't store the connection in the session, because it's not data, it's an active resource which will be closed when PHP exits.
Request the default Channel on the connection. This is really just part of the connection code, and shouldn't consume any significant time or resources.
Declare the queue. This will check if the queue already exists, and if it does, will do nothing. On the other hand, if you know the queue exists (because it's a permanent queue created in an admin interface, or you're sure another process will have created it) you can skip this line.
Bind the queue to the exchange. This is part of the setup of the queue; if the queue didn't exist and wasn't already bound, there would be nothing in it to consume until after this line runs. As with the previous step, this can probably be skipped if you know it has happened elsewhere.
The normal way to avoid re-connecting (steps 1 and 2) is to have the consumer running in the background, e.g. starting a command-line PHP script using supervisord which runs continuously processing messages as they come in. However, that won't work if you need to get data back to the browser once it appears in the queue.
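For illustration, a minimal sketch of such a background consumer (queue and exchange names copied from the question, php-amqplib assumed; it would be started once from the command line, e.g. under supervisord):
<?php
// consumer.php - long-running CLI process, not a web request
require_once __DIR__ . '/vendor/autoload.php';
use PhpAmqpLib\Connection\AMQPStreamConnection;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();
$channel->queue_declare('CyOS EV Queue 0', false, false, false, false);
$channel->queue_bind('CyOS EV Queue 0', 'CyOS-EX');

// The callback runs for every message as it arrives; no polling needed.
$channel->basic_consume('CyOS EV Queue 0', '', false, true, false, false, function ($msg) {
    echo $msg->body, "\n";
});

while (count($channel->callbacks)) {
    $channel->wait();
}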
Common alternatives to polling and creating a new PHP process each time include:
Long polling, where the AJAX call waits until it has something to return, rather than returning an empty result (see the sketch after this list).
Streaming the response (echoing each result from the PHP to the browser as you get it, but not ending the process).
WebSockets (I've not seen a good implementation in PHP, but one might be out there).
As I say, these are not specific to RabbitMQ, but apply to any time you're waiting for something to happen in a database, a file, a remote service, etc.
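As a rough sketch of the long-polling variant (again assuming php-amqplib and the question's queue naming; the timeouts are arbitrary): the request keeps checking the queue for up to ~25 seconds and returns as soon as a message arrives, so the browser re-issues the AJAX call once per message or per timeout instead of every second.
<?php
require_once __DIR__ . '/vendor/autoload.php';
use PhpAmqpLib\Connection\AMQPStreamConnection;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();
$id = isset($_GET['uid']) ? (int) $_GET['uid'] : 0;
$queue = 'CyOS EV Queue ' . $id; // queue assumed to already exist and be bound (steps 3-4 above)

$deadline = time() + 25;  // stop before typical web-server timeouts
$body = null;
while (time() < $deadline) {
    $msg = $channel->basic_get($queue, true); // non-blocking fetch
    if ($msg !== null) {
        $body = $msg->body;
        break;
    }
    usleep(200000); // wait 0.2s before checking again
}

$channel->close();
$connection->close();
echo $body === null ? '' : $body; // empty response means "nothing yet, poll again"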

PHP Open Multiple Connections

I would like to run multiple instances of the same script in different browser tabs, and I would like them to have different MySQL connections, each with its own unique connection.
I know that mysql_connect has a fourth parameter, $new_link, which should open a new link, but even that usually does not open a new connection. Sometimes it does.
I have a XAMPP install on a Windows machine.
The question is: how can I absolutely force PHP/MySQL to open a new connection for each instance of a script? The script runs for about 2 minutes.
http://localhost/myscript.php
Here are the excerpts of the MySQL code. First load a work assignment from DB and mark it as in progress:
public function loadRange() {
try{
$this->db()->query('START TRANSACTION');
$this->row = $this->db()->getObject("
SELECT * FROM {$this->tableRanges}
WHERE
status = " . self::STATUS_READY_FOR_WORK . "
AND domain_id = {$this->domainId}
ORDER BY sort ASC
LIMIT 1");
if(!$this->row) throw new Exception('Could not load range');
$this->db()->update($this->tableRanges, $this->row->id, array(
'thread_id' => $this->id,
'status' => self::STATUS_WORKING,
'run_name' => $this->runName,
'time_started' => time(),
));
$this->db()->query('COMMIT');
} catch(Exception $e) {
$this->db()->query('ROLLBACK');
throw new Exception($e->getMessage());
}
}
Then the script may or may not INSERT rows in another table based on what it finds.
In the end, when task is finished, the assignment row is UPDATEd again:
$this->db()->update($this->tableRanges, $this->row->id, array(
'status' => self::STATUS_EXECUTED,
'time_finished' => time(),
'count' => $count,
));
In particular, the $this->tableRanges table looks to be locked. Any idea why it is the case? It is an InnoDB table.
I would like to run multiple instances of the same script in different browser tabs, and I would like them to have different MySQL connections, each with its own unique connection.
This is actually the case, without any additional effort.
The question is: how can I absolutely force PHP/MySQL to open a new connection for each instance of a script?
Answer: do nothing :)
Every time you hit http://localhost/myscript.php a new instance is run. Everything about that instance is unique: the web server spawns a new PHP thread in which all the resources, connections and variables are unique.
Only state-management devices such as sessions are shared, and even then only if you are using different tabs in the same browser. If you hit the same URL with different browsers, the state-management resources are different too.
To answer your question, as others mentioned before: your connection is different for each instance IF you are using mysql_connect. You could instead create a persistent connection that does not close when the application exits and is reused for new connection requests by using mysql_pconnect. But your code seems to be using the former, and in that case you are fine.
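A quick way to see this, as a sketch (connection parameters are placeholders): print the server-side connection ID at the top of myscript.php and load the URL in two tabs; each request should report a different ID.
<?php
// Sketch: each HTTP request gets its own MySQL session.
$mysqli = new mysqli('localhost', 'user', 'password', 'mydatabase');
$row = $mysqli->query('SELECT CONNECTION_ID() AS id')->fetch_assoc();
echo 'This script instance is using MySQL connection #' . $row['id'];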
You can try setting the transaction isolation level so that SELECTs do not stall on rows locked by other instances:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ;
More information can be found here.
Again I guess it will take a bit of playing around to find which option works the best.
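For illustration, a hedged sketch of applying that suggestion with PDO (connection details, the ranges table and the status value are placeholders); whether READ UNCOMMITTED is acceptable depends on how stale your reads are allowed to be:
<?php
$pdo = new PDO('mysql:host=localhost;dbname=mydatabase', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Applies to all subsequent transactions on this session.
$pdo->exec('SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED');

$pdo->beginTransaction();
$row = $pdo->query('SELECT * FROM ranges WHERE status = 0 ORDER BY sort ASC LIMIT 1')
           ->fetch(PDO::FETCH_ASSOC);
// ... mark the range as in progress, insert results, etc. ...
$pdo->commit();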

How can I get php pdo code to keep retrying to connect if there are too many open connections?

I have an issue that has only cropped up now. I am on a shared web hosting plan that has a maximum of 10 concurrent database connections. The web app has dozens of queries, some PDO, some mysql_*.
Loading one page in particular peaks at 5-6 concurrent connections, meaning it only takes 2 users loading it at the same time to produce an error for one or both of them.
I know this is inefficient and I'm sure I can cut that down quite a bit, but my idea at the moment is to move the PDO code into a function, pass in a query string and an array of variables, and have it return an array (partly to tidy my code).
THE ACTUAL QUESTION:
How can I get this function to keep retrying until it manages to execute, holding up the script that called it (and any script that might have called that one) until it returns its data? I don't want things executing out of order; I am happy with code being delayed for a second or so during peak times.
Since someone will ask for code, here's what I do at the moment. I have this in a file on its own so I have a central place to change connection parameters. The if statement is merely to remove the need to continuously change the parameters when I switch from my test server to the live server.
$dbtype = "mysql";
$server_addr = $_SERVER['SERVER_ADDR'];
if ($server_addr == '192.168.1.10') {
$dbhost = "localhost";
} else {
$dbhost = "xxxxx.xxxxx.xxxxx.co.nz";
}
$dbname = "mydatabase";
$dbuser = "user";
$dbpass = "supersecretpassword";
I 'include' that file at the top of a function
include 'db_connection_params.php';
$pdo_conn = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);
then run commands like this all on the one connection
$sql = "select * from tbl_sub_cargo_cap where sub_model_sk = ?";
$capq = $pdo_conn->prepare($sql);
$capq->execute(array($sk_to_load));
while ($caprow = $capq->fetch(PDO::FETCH_ASSOC)) {
//stuff
}
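For what it's worth, a minimal sketch of the helper function described above (db_query and its exact signature are hypothetical, not something in the existing code):
<?php
include 'db_connection_params.php';
$pdo_conn = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);
$pdo_conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Run one query on an existing connection and return all rows as an array.
function db_query(PDO $pdo, $sql, array $params = array())
{
    $stmt = $pdo->prepare($sql);
    $stmt->execute($params);
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}

// Usage: every query on the page goes through the same single connection.
$caprows = db_query($pdo_conn, 'select * from tbl_sub_cargo_cap where sub_model_sk = ?', array($sk_to_load));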
You shouldn't need 5-6 concurrent connections for a single page; each page should really only ever use 1 connection. I'd try to re-architect whatever part of your application is causing multiple connections on a single page.
However, you should be able to catch a PDOException when the connection fails (documentation on connection management), and then retry some number of times.
A quick example,
<?php
$retries = 3;
while ($retries > 0)
{
try
{
$dbh = new PDO("mysql:host=localhost;dbname=blahblah", $user, $pass);
// Do query, etc.
$retries = 0;
}
catch (PDOException $e)
{
// Should probably check $e is a connection error, could be a query error!
echo "Something went wrong, retrying...";
$retries--;
usleep(500000); // Wait 0.5s between retries.
}
}
10 concurrent connections is A LOT. It can serve 10-15 online users easily.
It takes heavy effort to exhaust them.
So there is something wrong with your code.
There are 2 main reasons for it:
Slow queries take too much time, and thus serving one hit holds one MySQL connection for too long.
Multiple connections are opened from every script.
The former has to be investigated, but the latter is simple:
Do not mix mysql_* and PDO in one script: you are opening 2 connections at a time.
When using PDO, open the connection only once and then use it throughout your code.
Reducing the number of connections in one script is the only way to go.
If you have multiple instances of the PDO class in your code, you will need to add the timeout-handling code you want to every call. So heavy code rewriting is required anyway.
Replace these new instances with global $pdo; instead. It will take the same amount of time, but it will be a permanent solution, not a temporary patch like the one you want.
Please be sensible.
PHP automatically closes all the connections at the end of the script; you don't have to care about closing them manually.
Having only one connection throughout one script is a common practice. It is used by ALL the developers around the world. You can use it without any doubts. Just use it.
If you have a transaction open and want to log something in the database, you sometimes need 2 connections in one script.
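Going back to the global $pdo suggestion above, a minimal sketch of it (load_cargo_caps is a hypothetical helper name; the point is simply that new PDO() is called once and the object is reused):
<?php
include 'db_connection_params.php';
$pdo = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);

function load_cargo_caps($sk_to_load)
{
    global $pdo; // reuse the single connection instead of calling new PDO() again
    $stmt = $pdo->prepare('select * from tbl_sub_cargo_cap where sub_model_sk = ?');
    $stmt->execute(array($sk_to_load));
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}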

How does locking tables work?

I have a PHP script that will be requested several times "at the same time". I also have a field in a table, let's call it persons, as a flag for active/inactive. I want the first instance of the script that runs to set that field to inactive so that the remaining instances will die when they check that field. Can someone provide a solution for that? How can I ensure that this script will run only once?
PHP, PDO, MySQL
Thank you very much in advance.
Your script should fetch the current flag within a transaction using a locking read, such as SELECT ... FOR UPDATE:
$dbh = new PDO("mysql:dbname=$dbname", $username, $password);
$dbh->setAttribute(PDO::ATTR_EMULATE_PREPARES, FALSE);
$dbh->beginTransaction();
// using SELECT ... FOR UPDATE, MySQL will hold all other connections
// at this point until the lock is released
$qry = $dbh->query('SELECT persons FROM my_table WHERE ... FOR UPDATE');
if ($qry->fetchColumn() == 'active') {
$dbh->query('UPDATE my_table SET persons = "inactive" WHERE ...');
$dbh->commit(); // releases lock so others can see they are inactive
// we are the only active connection
} else {
$dbh->rollBack();
// we are inactive
}
You can use MySQL's own 'named' locking functions without ever having to lock a table: http://dev.mysql.com/doc/refman/5.0/en/miscellaneous-functions.html#function_get-lock
E.g. try get_lock('got here first', 0) with a 0 timeout. If you get the lock, you're first in the gate, and any subsequent requests will NOT get the lock and will immediately abort.
However, be careful with this stuff. If you don't clean up after yourself and the client which gained the lock terminates abnormally, the lock may not be released and your "need locks for this" system is dead in the water until you manually clear the lock.
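For illustration, a hedged sketch of that named-lock approach with PDO (the lock name is arbitrary):
<?php
$dbh = new PDO("mysql:dbname=$dbname", $username, $password);

// Try to acquire the named lock without waiting (0 second timeout).
// GET_LOCK returns 1 if we got it, 0 if another connection holds it.
$got = $dbh->query("SELECT GET_LOCK('my_script_lock', 0)")->fetchColumn();
if (!$got) {
    exit; // another instance is already running
}

// ... do the work that must only run once ...

// Release the lock explicitly; it is also released when this connection closes.
$dbh->query("SELECT RELEASE_LOCK('my_script_lock')");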
