Detecting no available server with PHP and Gearman

I'm currently making use of Gearman with PHP via the standard bindings (docs here). Everything works fine, but I have one small issue: I am not able to detect when a call to GearmanClient::addServer (docs here) is "successful".
The issue is that adding the server performs no socket I/O, meaning that the server may not actually exist or be operational. As a result, subsequent calls (in the scenario where the server does not in fact exist) fail and result in PHP warnings.
Is there any way, or what is the best way, to confirm that the Gearman daemon is operational on the server before or after adding it?
I would like to achieve this so that I can reliably handle scenarios in which Gearman has died or the server is uncontactable.
Many thanks.

We first tried this by manually calling fsockopen on the host and port passed to addServer, but it turns out that this can leave a lot of hanging connections as the Gearman server expects something to happen over that socket.
We use a monitor script to check the status of the daemon and its workers — something similar to this perl script on Google Groups. We modified the script to restart the daemon if it was not running.
If this does not appeal, have a look at the Gearman Protocol (specifically the “Administrative Protocol” section, referenced in the above thread) and use the status command. This will give you information on the status of the jobs and workers, but also means you can perform a socket connection to the daemon and not leave it hanging.
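For illustration, a minimal sketch of that approach in PHP: open a socket to the daemon, send the administrative status command, read until the terminating "." line, and close cleanly so nothing is left hanging. The function name, host, port and timeout below are placeholders, not part of the original answer.
// Sketch: check a Gearman daemon via the administrative protocol's "status" command.
function gearmanIsAlive($host = '127.0.0.1', $port = 4730, $timeout = 2)
{
    $socket = @fsockopen($host, $port, $errno, $errstr, $timeout);
    if ($socket === false) {
        return false; // daemon not reachable
    }
    stream_set_timeout($socket, $timeout);
    fwrite($socket, "status\n");
    // Read the reply until the admin protocol's terminating "." line.
    while (($line = fgets($socket)) !== false) {
        if (trim($line) === '.') {
            break;
        }
    }
    fclose($socket); // close the socket so no connection is left hanging
    return true;
}

if (gearmanIsAlive('127.0.0.1', 4730)) {
    $client = new GearmanClient();
    $client->addServer('127.0.0.1', 4730);
}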

You can use this library: https://github.com/necromant2005/gearman-stats
It has no external dependencies.
$adapter = new \TweeGearmanStat\Queue\Gearman(array(
    'h1' => array('host' => '10.0.0.1', 'port' => 4730, 'timeout' => 1),
    'h2' => array('host' => '10.0.0.2', 'port' => 4730, 'timeout' => 1),
));
$status = $adapter->status();
var_dump($status);

PHP CLI corrupts executed file

Problem description
Sometimes after running PHP CLI, the primary executed PHP file is erased. It's simply gone.
The file is missing / corrupted. Undelete (Netbeans History - revert delete) still shows the file, but it is not possible to recover it. It is also not possible to replace / reinstate the file with the identical filename.
Trial and error attempts
The issue occurs on three different computers, all running Windows 10 or Windows 11.
I've used different versions of PHP (php-7.3.10-Win32-VC15-x64, php-8.1.4-Win32-vs16-x64).
The code does not do any file I/O at all. It uses a React event loop, listening on a websocket - "server_worklistobserver.php":
<?php
require __DIR__ . '/vendor/autoload.php';

$context = array(
    'tls' => array(
        'local_cert'        => "certificate.crt",
        'local_pk'          => "certificate.key",
        'allow_self_signed' => true, // Set to false in production
        'verify_peer'       => false
    )
);

// Set up websocket server
$loop = React\EventLoop\Factory::create();
$application = new Ratchet\App('my.domain.com', 8443, 'tls://0.0.0.0', $loop, $context);

// Set up controller component
$controller = new WorklistObserver();
$application->route('/checkworklist', $controller, array('*'));

$application->run();
die('-server stopped-');
The disappearance happens when the PHP execution is cancelled, either by Ctrl-Break or, when run as a service, by a service stop/restart.
Execution is started by php.exe server_worklistobserver.php in a DOS box (cmd).
Running with administrator permissions has no effect. A hard-disk scan finds no issues. The problem is rather persistent, but not regular; it seems to happen by chance.
Associated PHP files are left intact.
The issue has never occurred with Apache-driven PHP execution.
Please help
What could I do differently? Does anyone have a similar experience? I can't find anything similar on the internet...
Thanks in advance.
Thanks so much, Chris! Indeed, I found the entries in the quarantine of my virus scanner. I've added an exception rule and don't expect this to happen again.

"no alive nodes found in cluster" while indexing docs

I have a "legacy" php application that we just migrated to run on Google Cloud (Kubernetes Engine). Along with it I also have a ElasticSearch installation (Elastic Cloud on Kubernetes) running. After a few incidents with Kubernetes killing my Elastic Search when we're trying to deploy other services we have come to the conclusion that we should probably not run ES on Kubernetes, at least if are to manage it ourselves. This due to a apparent lack of knowledge for doing it in a robust way.
So our idea is now to move to managed Elastic Cloud instead which was really simple to deploy and start using. However... now that I try to load ES with the data needed for our php application if fails mid-process with the error message no alive nodes found in cluster. Sometimes it happens after less than 1000 "documents" and other times I manage to get 5000+ of them indexed before failure.
This is how I initialize the es client:
$clientBuilder = ClientBuilder::create();
$clientBuilder->setElasticCloudId(ELASTIC_CLOUD_ID);
$clientBuilder->setBasicAuthentication('elastic',ELASTICSEARCH_PW);
$clientBuilder->setRetries(10);
$this->esClient = $clientBuilder->build();
ELASTIC_CLOUD_ID & ELASTICSEARCH_PW are set via environment vars.
The request looks something like:
$params = [
    'index' => $index,
    'type' => '_doc',
    'body' => $body,
    'client' => [
        'timeout' => 15,
        'connect_timeout' => 30,
        'curl' => [CURLOPT_HTTPHEADER => ['Content-type: application/json']],
    ],
];
The body and which index it targets depend on how far we get with the "ingestion", but it's generally pretty standard stuff.
All this works without any real problems when running against our own installation of Elastic in our own GKE cluster.
What I've tried so far is to add the retries and timeouts shown above, but none of that seems to make much of a difference.
We're running:
PHP 7.4
Elasticsearch 7.11
Elasticsearch PHP client 7.12 (installed via Composer)
If you use WAMP64, this error will occur; you have to use XAMPP instead.
Try the following command in the command prompt. If it runs, the problem is with your configuration.
curl -u elastic:<password> https://<endpoint>:<port>
(Example for Elastic Cloud:)
curl -u elastic:<password> example.es.us-central1.gcp.cloud.es.io:9234
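Separately from the connectivity check above, one pattern that can help with intermittent "no alive nodes found in cluster" failures is to wrap the index call and retry after a short pause. A minimal sketch, assuming the elasticsearch-php 7.x client from the question; the helper name and backoff values are illustrative, not from the original code:
use Elasticsearch\Common\Exceptions\NoNodesAvailableException;

// Illustrative helper: retry an index call a few times when the client
// reports that no nodes are alive, pausing a little longer each time.
function indexWithRetry($esClient, array $params, $maxAttempts = 5)
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            return $esClient->index($params);
        } catch (NoNodesAvailableException $e) {
            if ($attempt === $maxAttempts) {
                throw $e; // give up after the final attempt
            }
            sleep($attempt); // simple linear backoff: 1s, 2s, 3s, ...
        }
    }
}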

How to check if a fifo pipe is writable in PHP

I want to transfer messages from a PHP-based web-frontend to a backend service on a linux server.
I am sending the messages with file_put_contents. The interface works well when the backend service listens and reads the pipe created with mkfifo mypipe.
However, I would like to be prepared for a situation in which the backend service fails. In that case the user of the frontend should be notified and given alternative options.
Currently, when the backend is not running, the frontend becomes unresponsive, because file_put_contents blocks.
I tried various things to solve the problem. This includes trying to open the pipe with fopen before stream_context_create and setting a timeout with ini_set('default_socket_timeout', 10);
or
$context = stream_context_create(array(
    'http' => array(
        'timeout' => 10
    )
));
if (file_put_contents("mypipe", $data, FILE_APPEND | LOCK_EX, $context) == FALSE) {
    error_log("Could not write to pipe.");
} else {
    echo "Sent message";
}
I also tried the PHP function is_writable("mypipe"), but, as expected, it returns true regardless of whether the receiver is listening.
How can I check whether the pipe would block, and avoid the situation where the frontend becomes unresponsive?
Use fopen, stream_set_blocking, fwrite, and fclose instead of file_put_contents. When you do that, you'll be able to detect when the write would block, because fwrite will simply report that it didn't write anything instead of blocking.
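A minimal sketch of that approach, using the "mypipe" FIFO and $data from the question. Opening the FIFO in 'r+' mode rather than 'w' is an extra assumption here: on Linux it keeps fopen itself from blocking when no reader is attached.
// Sketch of the fopen / stream_set_blocking / fwrite / fclose approach.
// 'r+' avoids blocking in fopen when no reader has the FIFO open (Linux).
$fp = @fopen("mypipe", "r+");
if ($fp === false) {
    error_log("Could not open pipe.");
} else {
    stream_set_blocking($fp, false);      // make fwrite() non-blocking
    $written = fwrite($fp, $data);
    if ($written === false || $written < strlen($data)) {
        error_log("Backend not reading: write failed or would block.");
        // ... notify the user and offer alternative options here ...
    } else {
        echo "Sent message";
    }
    fclose($fp);
}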

PHP SOAP awfully slow: Response only after reaching fastcgi_read_timeout

The constructor of the SOAP client for my WSDL web service is awfully slow: I get the response only after the fastcgi_read_timeout value in my Nginx config is reached, as if the remote server were not closing the connection. I also need to set it to a minimum of 15 seconds, otherwise I get no response at all.
I have already read similar posts here on SO, especially PHP SoapClient constructor very slow and its linked threads, but I still cannot find the actual cause of the problem.
This is the part which takes 15+ seconds:
$client = new SoapClient("https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl");
It seems to be slow only when called from my PHP script, because the file opens instantly when accessed from any of the following:
wget from my server which is running the script
SoapUI or Postman (But I don't know if they cached it before)
opening the URL in a browser
Ports 80 and 443 are open in the firewall. Following the suggestion from another thread, I found two workarounds (the first of which is sketched below):
Loading the wsdl from a local file => fast
Enabling the wsdl cache and using the remote URL => fast
But still I'd like to know why it doesn't work with the original URL.
It seems as if the web service does not close the connection, or in other words, I get the response only after reaching the timeout set in my server config. I tried setting keepalive_timeout 15; in my Nginx config, but it does not work.
Is there any SOAP/PHP parameter which forces the server to close the connection?
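For reference, a sketch of the first workaround described above: download the WSDL once and point SoapClient at the local copy. The file name is illustrative; the saved WSDL still contains the remote service endpoint, so the actual calls go to the remote server as before and only the slow WSDL download is avoided.
// Workaround 1 (sketch): use a locally saved copy of the WSDL.
// Fetch it once, e.g.:
//   wget -O AtendeCliente.wsdl "https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl"
$client = new SoapClient(__DIR__ . '/AtendeCliente.wsdl');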
I was able to reproduce the problem, and found the solution to the issue (works, maybe not the best) in the accepted answer of a question linked in the question you referenced:
PHP: SoapClient constructor is very slow (takes 3 minutes)
As per the answer, you can adjust the HTTP headers using the stream_context option.
$client = new SoapClient("https://apps.correios.com.br/SigepMasterJPA/AtendeClienteService/AtendeCliente?wsdl", array(
    'stream_context' => stream_context_create(
        array('http' =>
            array(
                'protocol_version' => '1.0',
                'header' => 'Connection: Close'
            )
        )
    )
));
More information on the stream_context option is documented at http://php.net/manual/en/soapclient.soapclient.php
I tested this using PHP 5.6.11-1ubuntu3.1 (cli)

Slow DynamoDB for php session handling

We're using DynamoDB to synchronize sessions between multiple EC2 machines behind ELBs.
We noticed that this method slows the scripts down a lot.
Specifically, I made a JS script that calls three different PHP scripts on the server 10 times each.
1) The first one is just an echo timestamp(); and takes about 50ms as round-trip time.
2) The second one is a PHP script that connects through mysqli to RDS MySQL and takes about the same time (about 50-60ms).
3) The third script uses the DynamoDB session-handling method described in the official AWS documentation and takes about 150ms (3 times slower!!).
I'm cleaning up the garbage every night (as the documentation says) and the DynamoDB metrics seem OK (attached below).
The code I use is this:
use Aws\DynamoDb\DynamoDbClient;
use Aws\DynamoDb\Session\SessionHandler;

ini_set("session.entropy_file", "/dev/urandom");
ini_set("session.entropy_length", "512");
ini_set('session.gc_probability', 0);

require 'aws.phar';

$dynamoDb = DynamoDbClient::factory(array(
    'key'    => 'XXXXXX',
    'secret' => 'YYYYYY',
    'region' => 'eu-west-1'
));

$sessionHandler = SessionHandler::factory(array(
    'dynamodb_client'          => $dynamoDb,
    'table_name'               => 'sessions',
    'session_lifetime'         => 259200,
    'consistent_read'          => true,
    'locking_strategy'         => null,
    'automatic_gc'             => 0,
    'gc_batch_size'            => 25,
    'max_lock_wait_time'       => 15,
    'min_lock_retry_microtime' => 5000,
    'max_lock_retry_microtime' => 50000,
));

$sessionHandler->register();

session_start();
Am I doing something wrong, or is it normal for it to take that long to retrieve the session?
Thanks.
Copying correspondence from an AWS engineer in AWS forums: https://forums.aws.amazon.com/thread.jspa?messageID=597493
Here are a couple of things to check:
Are you running your application on EC2 in the same region as your DynamoDB table?
Have you enabled opcode caching to ensure that the classes used by the SDK do not need to be loaded from disk and parsed each time your script is run?
Using a web server like Apache and connecting to a DynamoDB session will require a new SSL connection to be established on each request. This is because PHP doesn't (currently) allow you to reuse cURL connection handles between requests. Some database drivers do allow for persistent connections between requests, which could account for the performance difference.
If you follow up on the AWS forums thread, an AWS engineer should be able to help you with your issue. This thread is also monitored if you want to keep it open.
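As a quick way to confirm that the overhead really sits in session_start() rather than elsewhere in the script, here is a minimal timing sketch, reusing the handler setup from the question:
// Assumes $sessionHandler from the question has already been registered.
$t0 = microtime(true);
session_start();
error_log(sprintf("session_start() took %.1f ms", (microtime(true) - $t0) * 1000));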
