What is the best practice for receiving data from a queue every second via PHP? I do this with an AJAX query that calls the PHP script every second. There, a connection object is created and a queue is declared every time. I tried to save these in a session variable after the first call, but when I call the PHP script a second time, I can't receive any more data. When I debug the channel object, I see that is_open is false:
protected 'is_open' => boolean false
Here is my basic PHP test code:
<?php
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

session_start(); // start session handling

$id = $_GET["uid"];
$connected = $_GET["connected"];
if (empty($id)) {
    $id = 0;
}
$queue = 'CyOS EV Queue ' . $id;

$reset = $_GET["reset"];
if ($reset === "true") {
    session_destroy();
    $_SESSION = array();
    echo "session destroyed";
    var_dump($_SESSION);
    exit;
}

$connection = null;
$channel = null;

if (!isset($_SESSION['connected'])) {
    $_SESSION['connected'] = true;
    $connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
    $channel = $connection->channel();
    $channel->queue_declare($queue, false, false, false, false, false);
    $channel->queue_bind($queue, 'CyOS-EX');
    $_SESSION['connection'] = $connection;
    $_SESSION['channel'] = $channel;
} else {
    echo "already connected \n\r";
    $connection = $_SESSION['connection'];
    $channel = $_SESSION['channel'];
    var_dump($_SESSION);
}

$test = null;
$i = 0;
while ($i < 10) {
    echo "try to get data from " . $queue . "\n\r";
    $test = $channel->basic_get($queue, true); // returns null when the queue is empty
    $i++;
    if (isset($test)) {
        echo "received data";
        break;
    }
}
echo $test->body;
When I initialize the connection and the channel every time I call the script, it works.
I presume the lines you are concerned about are these:
$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();
$channel->queue_declare($queue, false, false, false, false, false);
$channel->queue_bind($queue, 'CyOS-EX');
Let's look at what's happening here:
Connect to the RabbitMQ server. This is like connecting to a database, or memcache, or any other external process, and needs to happen in each PHP request. You can't store the connection in the session, because it's not data, it's an active resource which will be closed when PHP exits.
Request the default Channel on the connection. This is really just part of the connection code, and shouldn't consume any significant time or resources.
Declare the queue. This will check if the queue already exists, and if it does, will do nothing. On the other hand, if you know the queue exists (because it's a permanent queue created in an admin interface, or you're sure another process will have created it) you can skip this line.
Bind the queue to the exchange. This is part of the setup of the queue; if the queue didn't exist and wasn't already bound, there would be nothing in it to consume until after this line runs. As with the previous step, this can probably be skipped if you know it's happened elsewhere.
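Putting that together, a minimal per-request version of the polling endpoint might look like this (a sketch assuming the queue has already been declared and bound elsewhere, so steps 3 and 4 are skipped):

<?php
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;

// Reconnect on every request; the connection is an active resource
// and cannot be carried over in the session.
$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();

// basic_get() returns null immediately when the queue is empty.
$message = $channel->basic_get('CyOS EV Queue 0', true);
echo $message !== null ? $message->body : '';

$channel->close();
$connection->close();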
The normal way to avoid re-connecting (steps 1 and 2) is to have the consumer running in the background, e.g. a command-line PHP script started by supervisord which runs continuously, processing messages as they come in. However, that won't work if you need to get data back to the browser once it appears in the queue.
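Such a background consumer could be sketched like this (again assuming the queue name from the question; the callback body is a placeholder):

<?php
// consumer.php - run from the command line, e.g. under supervisord
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();

$channel->basic_consume('CyOS EV Queue 0', '', false, true, false, false,
    function (AMQPMessage $message) {
        // process the message here
        echo $message->body, "\n";
    });

// Block, dispatching each message to the callback as it arrives.
while (count($channel->callbacks)) {
    $channel->wait();
}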
Common alternatives to polling and creating a new PHP process each time include:
Long polling, where the AJAX call waits until it has something to return, rather than returning an empty result (see the sketch after this list).
Streaming the response (echoing each result from the PHP to the browser as you get it, but not ending the process).
WebSockets (I've not seen a good implementation in PHP, but one might be out there).
As I say, these are not specific to RabbitMQ, but apply to any time you're waiting for something to happen in a database, a file, a remote service, etc.
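For example, a long-polling variant of the endpoint from the question could look roughly like this (a sketch under the same assumptions as above):

<?php
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();

// Hold the request open for up to 20 seconds and return as soon
// as a message arrives, instead of returning an empty result.
$deadline = time() + 20;
do {
    $message = $channel->basic_get('CyOS EV Queue 0', true);
    if ($message !== null) {
        echo $message->body;
        break;
    }
    usleep(250000); // wait 250 ms between polls
} while (time() < $deadline);

$channel->close();
$connection->close();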
Related
I am using the Google client API to fetch the status of my instances in order to keep a local database copy.
It is possible that multiple scripts update my local copy. I could fetch the data, and while the data is travelling back to my server, another script could modify it. After I store the data from the original fetch, a lost update results.
Therefore I need to use transactions to block all other traffic to my table while I am making an update.
This is the code for fetching:
<?php
require_once './gcloud/vendor/autoload.php';

$client = new Google_Client();
$client->setApplicationName('Google-ComputeSample/0.1');
$client->useApplicationDefaultCredentials();
$client->addScope('https://www.googleapis.com/auth/cloud-platform');
$service = new Google_Service_Compute($client); // service object used by the listInstances call below

$project = 'project_id'; // TODO: Update placeholder value.
$zone = 'us-east1-b'; // TODO: Update placeholder value.
$instance = 'instance-1'; // TODO: Update placeholder value.

$mysqli = new mysqli($hn, $un, $pw, $db); // credentials assumed to be defined elsewhere
$mysqli->begin_transaction();
$listInstancesJSON = $service->instances->listInstances($project, $zone, []);
//store it
$mysqli->commit();
Blocking the table while making a request sounds like a terrible idea. I think I'll add ini_set('max_execution_time', 5); at the start of the script, just in case the fetch fails (I presume they use curl). If the execution time exceeds 5 seconds, would my table (or database) remain blocked even after the script terminates? Is there any other defence mechanism I should implement?
I plan to run this code as a cron job every minute.
It sounds like listInstances needs the equivalent of
SELECT ... FOR UPDATE
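A sketch of that pattern with mysqli (the t_instances table and its columns are made up here for illustration; fetching before the transaction keeps the lock window short):

// Fetch outside the transaction, so no lock is held while
// waiting on the network.
$listInstancesJSON = $service->instances->listInstances($project, $zone, []);

$mysqli->begin_transaction();
// Lock only the affected rows until commit, instead of the whole table.
$result = $mysqli->query("SELECT id FROM t_instances WHERE zone = 'us-east1-b' FOR UPDATE");
// ... compare the locked rows with the fetched data, then UPDATE/INSERT ...
$mysqli->commit();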
I have a question about parallel socket connections in PHP.
I'm not a socket expert, which is why I'm asking this question here.
I created a library which is capable of persisting data (string/int/float/array/object or anything that is losslessly serializable), like a cache, in PHP. I have provided the part of my code which I think causes the problem, but if you need additional info, visit: https://github.com/dude920228/php-cache
The concept was:
Create a TCP socket server
Accept connections
Read a serialized PHP array from the connecting client
Call the action handler for the action received from the client, do some stuff with the data that was sent if necessary, or write back to the client
Close the connection on the client side
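For context, a client interaction under this concept would look something like the following hypothetical sketch (the host, port, and command format here are illustrative; the real wire format is defined in the linked repository):

// Hypothetical client: send one serialized command, read the reply.
$socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_connect($socket, '127.0.0.1', 1024);

$command = serialize(array('action' => 'get', 'key' => 'some-key'));
socket_write($socket, $command, strlen($command));

$reply = socket_read($socket, 8192);
var_dump(unserialize($reply));

socket_close($socket);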
It works like a charm with around 25 parallel connections, but for some reason the clients do not reliably receive the data they should get. At 100 parallel connections, only about half of the clients get the data. The program does not emit any errors, and the only error control operator I use is at the socket_accept call in the main loop. The main loop looks like this:
public function run()
{
    $this->socket = $this->ioHandler->createServerSocket();
    while ($this->running) {
        $this->maintainer->checkBackup(time(), $this->bucket);
        $this->maintainer->maintainBucket($this->bucket);
        if (($connection = @socket_accept($this->socket))) {
            $clientId = uniqid();
            socket_set_nonblock($connection);
            $this->clients[$clientId] = $connection;
            $read = $this->clients;
            $write = array();
            $except = array();
            socket_select($read, $write, $except, null);
            $dataString = $this->ioHandler->readFromSocket($connection);
            $data = unserialize($dataString);
            ($this->actionHandler)($data, $this->bucket, $this->ioHandler, $connection);
            $this->ioHandler->closeSocket($connection);
            unset($this->clients[$clientId]);
        }
    }
    $this->close();
}
Am I misusing the socket_select call there? Or is the problem in another class? Thanks in advance for the answers!
I have an SQLite database that I query from PHP periodically. The query is always the same, and it returns a string. Once the string changes in the database, the loop ends.
The following code is working, but I am pretty sure this is not the optimal way to do this...
class MyDB extends SQLite3
{
    function __construct()
    {
        $this->open('db.sqlite');
    }
}

$loop = true;
while ($loop == true) {
    sleep(10);
    $db = new MyDB();
    if (!$db) {
        echo $db->lastErrorMsg();
    } else {
        echo "Opened database successfully\n";
    }
    // $file_name is assumed to be defined earlier; it must be quoted in the SQL string
    $sql = "SELECT status FROM t_jobs WHERE name='" . SQLite3::escapeString($file_name) . "'";
    $ret = $db->query($sql);
    $state = $ret->fetchArray(SQLITE3_ASSOC);
    $output = (string)$state['status'];
    if (strcmp($output, 'FINISHED') == 0) {
        $loop = false;
    }
    echo $output;
    $db->close();
}
If you want output immediately and a kind of interface, I think the best solution for your problem might be HTTP long polling. This way, it will not hold the connection for hours if the job is not done:
you will need to write a JavaScript snippet (in another HTML or PHP page) that runs an AJAX call to your current PHP code.
Your web server (and so your PHP code) will keep the connection open for a while, until the job is done or a time limit is reached (say 20-30 seconds).
If the job is not done, the JavaScript will make another AJAX call and everything will start again, keeping the connection, etc., until you get the expected output status.
BEWARE: this solution will not work on every hosting provider.
You will need to set max_execution_time to a higher value than the default one; see the PHP documentation for this.
I think you can find many things on HTTP long polling with PHP/JavaScript on Google / Stack Overflow...
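Applied to the code from the question, the server side of such a long poll could be sketched like this (assuming $file_name comes from the request; the 25-second budget is arbitrary):

<?php
ini_set('max_execution_time', 30); // raise the default limit for the long poll

$db = new SQLite3('db.sqlite');
$deadline = time() + 25;
$output = '';

do {
    $sql = "SELECT status FROM t_jobs WHERE name='" . SQLite3::escapeString($file_name) . "'";
    $state = $db->query($sql)->fetchArray(SQLITE3_ASSOC);
    $output = (string)$state['status'];
    if ($output === 'FINISHED') {
        break; // return immediately once the job is done
    }
    sleep(1);
} while (time() < $deadline);

$db->close();
echo $output; // the JavaScript caller re-polls if this is not FINISHED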
Please help me implement notifications of new messages for users.
Right now I have this client code:
function getmess() {
    $.ajax({
        url: "notif.php",
        data: {"id": id},
        type: "GET",
        success: function(result) {
            $("#count").html(result);
            setTimeout(getmess, 10000); // pass the function itself, not a string
        }
    });
}
and this server code:
$mysqli = new mysqli('localhost', 'root', '', 'test');
if (mysqli_connect_errno()) {
    printf("error: %s\n", mysqli_connect_error());
    exit;
}
session_start();
$MY_ID = $_SESSION['id'];

while (true) {
    $result = $mysqli->query("SELECT COUNT(*) FROM messages WHERE user_get='$MY_ID'");
    if (mysqli_num_rows($result)) {
        while ($row = mysqli_fetch_array($result)) {
            echo $row[0] . "";
        }
        flush();
        exit;
    }
    sleep(5);
}
The problem I have is that this script does not update in real time when a new message is added to the database. But if I press a button with onclick="getmess();", it works.
First, you check your database every 5 seconds, so you can't achieve real time - you have at least a 5-second delay.
And second, there is no way you can achieve real time by polling.
The way to deliver notifications in near real time is to send the message from the same code that inserts the record into the database, i.e. you should not query the database for new records; instead, when there is a new record, push the data to the client. This holds even with long polling as the transport protocol.
How to achieve this? Unfortunately, PHP is not a good choice. You need a non-blocking server to hold the connection, you need to know which connection is waiting for what data, and you need a way for PHP (your backend) to notify that connection.
You can use the Tornado web server, Node.js, or nginx to handle the connections. You assign an identifier to each connection (you probably already have one - the user id), and when a new record is added, the PHP script performs an HTTP request to the notification server (Tornado, Node.js, nginx) saying which data goes to which user.
For nginx, take a look at the nginx push stream module.
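That last step could be sketched like this, assuming the push stream module is configured with a publisher endpoint at /pub and one channel per user (the URL, port, and channel naming are assumptions that depend on your nginx configuration):

// Called by the same PHP code that inserts the message into the
// database: forward the new data to the user's open connection.
function notifyUser($userId, $text) {
    $ch = curl_init('http://localhost:9080/pub?id=user_' . (int)$userId);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode(array('text' => $text)));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    curl_close($ch);
}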
I have a page where a few thousand users can hit a method at the same time. I currently have the following code, where I connect every time. Since this goes to a separate memcache server, will this cause slowdowns, and is there a way to connect just once and reuse that connection? Do I have to close the connection after every request?
$primary_connected = $memcache_primary->connect($primary_memcache_server, 11211);
if ($primary_connected) {
    $data = $memcache_primary->get($key);
    if ($data != NULL) {
        return $data; // note the $ sign: "return data" is a parse error
    }
} else {
    ///// Get data from database
}
If you are using the PHP memcached class (the one with the d on the end, not memcache) then yes, you can open a persistent connection.
You can pass a persistent ID to the constructor which will open a persistent connection and subsequent instances that use the same persistent ID will use that connection.
$memcached = new Memcached('method_name_or_persistent_identifier');
$memcached->addServer(...);
// use it
Hope that helps.
See Memcached::__construct() for more details.
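Applied to the snippet from the question, that could look roughly like this (a sketch; the persistent ID string is arbitrary, and the server variable comes from the question):

$memcached = new Memcached('page_cache_pool');

// Only add servers once per pool: on later requests the persistent
// connection already carries the server list.
if (!count($memcached->getServerList())) {
    $memcached->addServer($primary_memcache_server, 11211);
}

$data = $memcached->get($key);
if ($data !== false) {
    return $data;
}
// ... otherwise fall through and get the data from the database ...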