I took a look at this other question. I am looking for a way to do what the OP of that question wants as well, namely to continue processing PHP after sending the HTTP response, but in Symfony2.
I implemented an event that fires after every kernel termination. So far so good, but what I want is for it to fire only after CERTAIN terminations, in specific controller actions, for instance after a form was submitted, not every single time at every request. That is because I want to do some heavy tasks at certain times and don't want the end user to wait for the page to load.
Any idea how can I do that?
<?php
namespace MedAppBundle\Event;
use JMS\DiExtraBundle\Annotation\InjectParams;
use JMS\DiExtraBundle\Annotation\Service;
use JMS\DiExtraBundle\Annotation\Tag;
use Psr\Log\LoggerInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Symfony\Component\HttpKernel\KernelEvents;
use Symfony\Component\Console\ConsoleEvents;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use JMS\DiExtraBundle\Annotation\Inject;
/**
* Class TestListener
* @package MedAppBundle\EventListener
* @Service("medapp_test.listener")
* @Tag(name="kernel.event_subscriber")
*/
class TestListener implements EventSubscriberInterface
{
private $container;
private $logger;
/**
* Constructor.
*
* @param ContainerInterface $container A ContainerInterface instance
* @param LoggerInterface $logger A LoggerInterface instance
* @InjectParams({
* "container" = @Inject("service_container"),
* "logger" = @Inject("logger")
* })
*/
public function __construct(ContainerInterface $container, LoggerInterface $logger = null)
{
$this->container = $container;
$this->logger = $logger;
}
public function onTerminate()
{
$this->logger->notice('fired');
}
public static function getSubscribedEvents()
{
$listeners = array(KernelEvents::TERMINATE => 'onTerminate');
if (class_exists('Symfony\Component\Console\ConsoleEvents')) {
$listeners[ConsoleEvents::TERMINATE] = 'onTerminate';
}
return $listeners;
}
}
So far I've subscribed to the kernel.terminate event, but obviously this fires at each request. I made it similar to Swiftmailer's EmailSenderListener.
It feels kind of strange that the kernel must listen each time for this event even when it's not triggered. I'd rather have it fired only when needed, but not sure how to do that.
In the onTerminate callback you get an instance of PostResponseEvent as first parameter. You can get the Request as well as the Response from that object.
Then you should be able to decide if you want to run the actual termination code.
Also you can store custom data in the attributes bag of the Request. See this link: Symfony and HTTP Fundamentals
The Request class also has a public attributes property, which holds special data related to how the application works internally. For the Symfony Framework, the attributes holds the values returned by the matched route, like _controller, id (if you have an {id} wildcard), and even the name of the matched route (_route). The attributes property exists entirely to be a place where you can prepare and store context-specific information about the request.
Your code could look something like this:
// ...
class TestListener implements EventSubscriberInterface
{
// ...
public function onTerminate(PostResponseEvent $event)
{
$request = $event->getRequest();
if ($request->attributes->get('_route') == 'some_route_name') {
// do stuff
}
}
// ...
}
Edit:
The kernel.terminate event is designed to run after the response is sent. But the Symfony documentation says the following (taken from here):
Internally, the HttpKernel makes use of the fastcgi_finish_request PHP function. This means that at the moment, only the PHP FPM server API is able to send a response to the client while the server's PHP process still performs some tasks. With all other server APIs, listeners to kernel.terminate are still executed, but the response is not sent to the client until they are all completed.
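For reference, outside of Symfony the same mechanism boils down to something like the following plain-PHP sketch; it assumes PHP-FPM, and do_heavy_work() is just a made-up placeholder:
echo 'Response body';
fastcgi_finish_request(); // flushes the response to the client and closes the connection (PHP-FPM only)
do_heavy_work();          // keeps running on the server after the client already has the response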
Edit 2:
To use the solution from here, you could either edit the web/app.php file directly and add it there (but that feels like "hacking core" to me, even though it would be easier than the following), or you could do it like this:
Add a listener to kernel.request event with a high priority and start output buffering (ob_start).
Add a listener to kernel.response and add the header values to the response.
Add another listener with highest priority to kernel.terminate and do the flushing (ob_flush, flush).
Run your code in a separate listener with lower priority to kernel.terminate
I did not try it, but it should actually work.
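A rough, untested sketch of those listeners might look like this (the class name and method names are my own; the listeners still have to be registered with the matching events and priorities):
use Symfony\Component\HttpKernel\Event\GetResponseEvent;
use Symfony\Component\HttpKernel\Event\FilterResponseEvent;
use Symfony\Component\HttpKernel\Event\PostResponseEvent;

class EarlyFlushListener
{
    // kernel.request, high priority: buffer everything from now on
    public function onKernelRequest(GetResponseEvent $event)
    {
        ob_start();
    }

    // kernel.response: add the headers that let the client stop waiting
    public function onKernelResponse(FilterResponseEvent $event)
    {
        $response = $event->getResponse();
        $response->headers->set('Connection', 'close');
        $response->headers->set('Content-Length', (string) strlen($response->getContent()));
    }

    // kernel.terminate, highest priority: push the buffered response out
    public function onKernelTerminateFlush(PostResponseEvent $event)
    {
        while (ob_get_level()) {
            ob_end_flush();
        }
        flush();
    }

    // kernel.terminate, lower priority: the actual heavy work goes here
    public function onKernelTerminateWork(PostResponseEvent $event)
    {
        // ...
    }
}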
To solve this issue for some of my use cases, I simply create Symfony console commands for the heavy tasks and call them via exec() so that they run in a separate process.
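Something along these lines, for example (the command name is made up, this assumes a container-aware context, and in Symfony2 the console script lives in the app/ directory):
$appDir = $this->container->getParameter('kernel.root_dir');
// "> /dev/null 2>&1 &" detaches the command so the request does not wait for it
exec(sprintf('php %s/console app:heavy-task --env=prod > /dev/null 2>&1 &', $appDir));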
I used these answers to write a Response class that has this functionality:
https://stackoverflow.com/a/28738208/1153227
This implementation will work on Apache and not just PHP FPM. However, to make this work we must prevent Apache from using gzip (by using an invalid Content-Encoding) so it makes sense to have a custom Response class to specify exactly when having an early response is more important than compression.
use Symfony\Component\HttpFoundation\Response;
class EarlyResponse extends Response
{
// Functionality adapted from this answer: https://stackoverflow.com/a/7120170/1153227
protected $callback = null;
/**
* Constructor.
*
* @param mixed $content The response content, see setContent()
* @param int $status The response status code
* @param array $headers An array of response headers
*
* @throws \InvalidArgumentException When the HTTP status code is not valid
*/
public function __construct($content = '', $status = 200, $headers = array(), $callback = null)
{
if (null !== $callback) {
$this->setTerminateCallback($callback);
}
parent::__construct($content, $status, $headers);
}
/**
* Sets the PHP callback associated with this Response.
* It will be called after the terminate events fire and thus after we've sent our response and closed the connection
*
* @param callable $callback A valid PHP callback
*
* @throws \LogicException
*/
public function setTerminateCallback($callback)
{
//Copied From Symfony\Component\HttpFoundation\StreamedResponse
if (!is_callable($callback)) {
throw new \LogicException('The Response callback must be a valid PHP callable.');
}
$this->callback = $callback;
}
/**
* @return Current_Class_Name
*/
public function send() {
if (function_exists('fastcgi_finish_request') || 'cli' === PHP_SAPI) { // we don't need the hack when using fast CGI
return parent::send();
}
ignore_user_abort(true);//prevent apache killing the process
if (!ob_get_level()) { // Check if an ob buffer exists already.
ob_start();//start the output buffer
}
$this->sendContent(); //Send the content to the buffer
static::closeOutputBuffers(1, true); //flush all but the last ob buffer level
$this->headers->set('Content-Length', ob_get_length()); // Set the content length using the last ob buffer level
$this->headers->set('Connection', 'close'); // Close the Connection
$this->headers->set('Content-Encoding', 'none');// This invalid header value keeps Apache from buffering the response for compression, which would delay sending it
// See: https://serverfault.com/questions/844526/apache-2-4-7-ignores-response-header-content-encoding-identity-instead-respect
$this->sendHeaders(); //Now that we have the headers, we can send them (which will avoid the ob buffers)
static::closeOutputBuffers(0, true); //flush the last ob buffer level
flush(); // After we flush the OB buffer to the normal buffer, we still need to send the normal buffer to output
session_write_close();//close session file on server side to avoid blocking other requests
return $this;
}
/**
* @return Current_Class_Name
*/
public function callTerminateCallback() {
if ($this->callback) {
call_user_func($this->callback);
}
return $this;
}
}
You also need to add a method to your AppKernel.php to make this work (don't forget to add a use statement for your EarlyResponse class)
public function terminate(Request $request, Response $response)
{
ob_start();
//Run this stuff before the terminate events
if ($response instanceof EarlyResponse) {
$response->callTerminateCallback();
}
//Trigger the terminate events
parent::terminate($request, $response);
//Optionally, we can output the buffer that will get cleaned to a file before discarding its contents
//file_put_contents('/tmp/process.log', ob_get_contents());
ob_end_clean();
}
I'm writing a program for my Raspberry Pi that consists of two main parts:
A C program that uses the Spotify API "Libspotify" to search for music and to play it.
A PHP program, running on an Apache web server, to control the Spotify program from a PC or smartphone on my local network.
The two separate programs communicate through a couple of files.
The C function that receives the user input is called once a second and works like this:
void get_input() {
int i = 0, c;
FILE *file;
struct stat stat;
char buffer[INPUT_BUFFER_SIZE];
file = fopen(PATH_TO_COMMUNICATION, "r");
if (file == NULL) {
perror("Error opening PATH_TO_COMMUNICATION");
return;
}
fstat(fileno(file), &stat);
if (stat.st_size > 1) {
while((c = fgetc(file)) != EOF && i < INPUT_BUFFER_SIZE - 1) { // leave room for the terminating '\0'
buffer[i] = c;
++i;
}
buffer[i] = '\0';
parse_input(buffer);
}
fclose(file);
clear_file(PATH_TO_COMMUNICATION);
}
So, via fwrite() in PHP, I can send commands to the C program.
Afterwards, the C program parses the input and writes the results to a "results" file. Once this is done, it writes the last contents of the "communication" file to a "last_query" file, so that in PHP I can see when the entire results have been written to "results":
function search($query) {
write_to_communication("search ".$query);
do { // Wait for results
$file = fopen("../tmp/last_query", "r");
$line = fgets($file);
fclose($file);
time_nanosleep(0, 100000000);
} while($line != $query);
echo get_result_json();
}
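The write_to_communication() helper isn't shown; a minimal sketch could look like the following (the path is an assumption, and flock() only helps against the read-during-write races mentioned below if the C side locks the file as well):
function write_to_communication($command) {
    $fp = fopen("../tmp/communication", "w");
    if ($fp === false) {
        return false;
    }
    flock($fp, LOCK_EX);  // advisory lock; the C program must also use flock() for this to matter
    fwrite($fp, $command);
    fflush($fp);
    flock($fp, LOCK_UN);
    fclose($fp);
    return true;
}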
It does work already, but I don't like the approach at all. There is a lot of polling and unnecessary opening and closing of different files. Also, in the worst case the program needs more than a second until it starts to do something with the user input. Furthermore, there can be race conditions when the C program tries to read the file while the PHP program is writing to it.
So, now to my question: What is the "right" way to achieve a nice and clean communication between the two program parts? Is there some completely different way without the ugly polling and without race conditions? Or can I improve the existing code so that it gets nicer?
I suppose that you wrote both the PHP and the C code yourself, and that you have access to compilers and so on? What I would do is not start up a different process and use inter-process communication at all, but write a PHP C++ extension that does all of this for you.
The extension starts up a new thread, and this thread picks up instructions from an in-memory instruction queue. When you're ready to pick up the result, the final instruction is sent to the thread (the instruction to close down), and when the thread has finished, you can pick up the result.
You could use a program like this:
#include <phpcpp.h>
#include <queue>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <unistd.h>
/**
* Class that takes instructions from PHP, and that executes them in CPP code
*/
class InstructionQueue : public Php::Base
{
private:
/**
* Queue with instructions (for simplicity, we store the instructions
* in strings, much cooler implementations are possible)
*
* @var std::queue
*/
std::queue<std::string> _queue;
/**
* The final result
* @var std::string
*/
std::string _result;
/**
* Counter with number of instructions
* @var int
*/
int _counter = 0;
/**
* Mutex to protect shared data
* @var std::mutex
*/
std::mutex _mutex;
/**
* Condition variable that the thread uses to wait for more instructions
* @var std::condition_variable
*/
std::condition_variable _condition;
/**
* Thread that processes the instructions
* @var std::thread
*/
std::thread _thread;
/**
* Procedure to execute one specific instruction
* @param instruction
*/
void execute(const std::string &instruction)
{
// @todo
//
// add your own the implementation, for now we just
// add the instruction to the result, and sleep a while
// to pretend that this is a difficult algorithm
// append the instruction to the result
_result.append(instruction);
_result.append("\n");
// sleep for a while
sleep(1);
}
/**
* Main procedure that runs the thread
*/
void run()
{
// need the mutex to access shared resources
std::unique_lock<std::mutex> lock(_mutex);
// keep looping
while (true)
{
// go wait for instructions
while (_counter == 0) _condition.wait(lock);
// check the number of instructions, leap out when empty
if (_queue.size() == 0) return;
// get instruction from the queue, and reduce queue size
std::string instruction(std::move(_queue.front()));
// remove front item from queue
_queue.pop();
// no longer need the lock
lock.unlock();
// run the instruction
execute(instruction);
// get back the lock for the next iteration of the main loop
lock.lock();
}
}
public:
/**
* C++ constructor
*/
InstructionQueue() : _thread(&InstructionQueue::run, this) {}
/**
* Copy constructor
*
* We just create a brand new queue when it is copied (copy constructor
* is required by the PHP-CPP library)
*
* @param queue
*/
InstructionQueue(const InstructionQueue &queue) : InstructionQueue() {}
/**
* Destructor
*/
virtual ~InstructionQueue()
{
// stop the thread
stop();
}
/**
* Method to add an instruction
* @param params Object representing PHP parameters
*/
void add(Php::Parameters &params)
{
// first parameter holds the instruction
std::string instruction = params[0];
// need a mutex to access shared resources
_mutex.lock();
// add instruction
_queue.push(instruction);
// update instruction counter
_counter++;
// done with shared resources
_mutex.unlock();
// notify the thread
_condition.notify_one();
}
/**
* Method to stop the thread
*/
void stop()
{
// is the thread already finished?
if (!_thread.joinable()) return;
// thread is still running, send instruction to stop (which is the
// same as not sending an instruction at all but just increasing the
// instruction counter, lock mutex to access protected data
_mutex.lock();
// add instruction
_counter++;
// done with shared resources
_mutex.unlock();
// notify the thread
_condition.notify_one();
// wait for the thread to finish
_thread.join();
}
/**
* Retrieve the result
* @return string
*/
Php::Value result()
{
// stop the thread first
stop();
// return the result
return _result;
}
};
/**
* Switch to C context to ensure that the get_module() function
* is callable by C programs (which the Zend engine is)
*/
extern "C" {
/**
* Startup function that is called by the Zend engine
* to retrieve all information about the extension
* @return void*
*/
PHPCPP_EXPORT void *get_module() {
// extension object
static Php::Extension myExtension("InstructionQueue", "1.0");
// description of the class so that PHP knows
// which methods are accessible
Php::Class<InstructionQueue> myClass("InstructionQueue");
// add methods
myClass.method("add", &InstructionQueue::add);
myClass.method("result", &InstructionQueue::result);
// add the class to the extension
myExtension.add(std::move(myClass));
// return the extension
return myExtension;
}
}
You can use this instruction queue from a PHP script like this:
<?php
$queue = new InstructionQueue();
$queue->add("instruction 1");
$queue->add("instruction 2");
$queue->add("instruction 3");
echo($queue->result());
As an example I've only added a silly implementation that sleeps for a while, but you could run your API-calling functions in there.
The extension uses the PHP-CPP library (see http://www.php-cpp.com).
I have a GPS Tracker that connects and send data to a defined public server:port through GPRS connection.
I can define the ip:port of the GPS device
My question is, can I just open a port in my server and listen/save the data received using PHP?
Thanks.
Edit/Update Aug. 16, 2017 :
User and library author @Navarr has commented that he has released a new, updated version of the library that the code in my original answer was based on. A link to the new code is on his GitHub here. Feel free to explore the new code and refer back to the original example here for insight (I have not personally explored/used the new code).
The below code will use the SocketServer.class.php file found here. It is meant to be run as a standalone process which means under Linux I had to make the file executable, then run it from command line using "php my_server.php".
For more information on running php scripts from command line:
http://www.funphp.com/?p=33
First grab the SocketServer.class.php file here:
http://www.phpclasses.org/browse/file/31975.html
Try this to make use of it, then tweak it to handle receiving your own incoming data as needed. Hope it helps.
<?php
require_once("SocketServer.class.php"); // Include the File
$server = new SocketServer("192.168.1.6",31337); // Create a Server binding to the given ip address and listen to port 31337 for connections
$server->max_clients = 10; // Allow no more than 10 people to connect at a time
$server->hook("CONNECT","handle_connect"); // Run handle_connect every time someone connects
$server->hook("INPUT","handle_input"); // Run handle_input whenever text is sent to the server
$server->infinite_loop(); // Run Server Code Until Process is terminated.
function handle_connect(&$server,&$client,$input)
{
SocketServer::socket_write_smart($client->socket,"String? ","");
}
function handle_input(&$server,&$client,$input)
{
// You probably want to sanitize your inputs here
$trim = trim($input); // Trim the input, Remove Line Endings and Extra Whitespace.
if(strtolower($trim) == "quit") // User Wants to quit the server
{
SocketServer::socket_write_smart($client->socket,"Oh... Goodbye..."); // Give the user a sad goodbye message, meany!
$server->disconnect($client->server_clients_index); // Disconnect this client.
return; // Ends the function
}
$output = strrev($trim); // Reverse the String
SocketServer::socket_write_smart($client->socket,$output); // Send the Client back the String
SocketServer::socket_write_smart($client->socket,"String? ",""); // Request Another String
}
Edit: To keep things relevant and functional for this answer, I found it best not to continue relying on code from an external source that may not always remain available (or stay at the URL given in my link). Therefore, for convenience, I am adding below the code that corresponds to the SocketServer.class.php file I linked to at the top of this post. (Apologies for the length and possible lack of indentation/formatting from copy/pasting; I am not the author of the code below.)
<?php
/*! #class SocketServer
#author Navarr Barnier
#abstract A Framework for creating a multi-client server using the PHP language.
*/
class SocketServer
{
/*! #var config
#abstract Array - an array of configuration information used by the server.
*/
protected $config;
/*! #var hooks
#abstract Array - a dictionary of hooks and the callbacks attached to them.
*/
protected $hooks;
/*! #var master_socket
#abstract resource - The master socket used by the server.
*/
protected $master_socket;
/*! #var max_clients
#abstract unsigned int - The maximum number of clients allowed to connect.
*/
public $max_clients = 10;
/*! #var max_read
#abstract unsigned int - The maximum number of bytes to read from a socket at a single time.
*/
public $max_read = 1024;
/*! #var clients
#abstract Array - an array of connected clients.
*/
public $clients;
/*! #function __construct
#abstract Creates the socket and starts listening to it.
#param string - IP Address to bind to, NULL for default.
#param int - Port to bind to
#result void
*/
public function __construct($bind_ip,$port)
{
set_time_limit(0);
$this->hooks = array();
$this->config["ip"] = $bind_ip;
$this->config["port"] = $port;
$this->master_socket = socket_create(AF_INET, SOCK_STREAM, 0);
socket_bind($this->master_socket,$this->config["ip"],$this->config["port"]) or die("Issue Binding");
socket_getsockname($this->master_socket,$bind_ip,$port);
socket_listen($this->master_socket);
SocketServer::debug("Listenting for connections on {$bind_ip}:{$port}");
}
/*! #function hook
#abstract Adds a function to be called whenever a certain action happens. Can be extended in your implementation.
#param string - Command
#param callback- Function to Call.
#see unhook
#see trigger_hooks
#result void
*/
public function hook($command,$function)
{
$command = strtoupper($command);
if(!isset($this->hooks[$command])) { $this->hooks[$command] = array(); }
$k = array_search($function,$this->hooks[$command]);
if($k === FALSE)
{
$this->hooks[$command][] = $function;
}
}
/*! #function unhook
#abstract Deletes a function from the call list for a certain action. Can be extended in your implementation.
#param string - Command
#param callback- Function to Delete from Call List
#see hook
#see trigger_hooks
#result void
*/
public function unhook($command = NULL,$function)
{
$command = strtoupper($command);
if($command !== NULL)
{
$k = array_search($function,$this->hooks[$command]);
if($k !== FALSE)
{
unset($this->hooks[$command][$k]);
}
} else {
$k = array_search($this->user_funcs,$function);
if($k !== FALSE)
{
unset($this->user_funcs[$k]);
}
}
}
/*! #function loop_once
#abstract Runs the class's actions once.
#discussion Should only be used if you want to run additional checks during server operation. Otherwise, use infinite_loop()
#param void
#see infinite_loop
#result bool - True
*/
public function loop_once()
{
// Setup Clients Listen Socket For Reading
$read[0] = $this->master_socket;
for($i = 0; $i < $this->max_clients; $i++)
{
if(isset($this->clients[$i]))
{
$read[$i + 1] = $this->clients[$i]->socket;
}
}
// Set up a blocking call to socket_select
if(socket_select($read,$write = NULL, $except = NULL, $tv_sec = 5) < 1)
{
// SocketServer::debug("Problem blocking socket_select?");
return true;
}
// Handle new Connections
if(in_array($this->master_socket, $read))
{
for($i = 0; $i < $this->max_clients; $i++)
{
if(empty($this->clients[$i]))
{
$temp_sock = $this->master_socket;
$this->clients[$i] = new SocketServerClient($this->master_socket,$i);
$this->trigger_hooks("CONNECT",$this->clients[$i],"");
break;
}
elseif($i == ($this->max_clients-1))
{
SocketServer::debug("Too many clients... :( ");
}
}
}
// Handle Input
for($i = 0; $i < $this->max_clients; $i++) // for each client
{
if(isset($this->clients[$i]))
{
if(in_array($this->clients[$i]->socket, $read))
{
$input = socket_read($this->clients[$i]->socket, $this->max_read);
if($input == null)
{
$this->disconnect($i);
}
else
{
SocketServer::debug("{$i}#{$this->clients[$i]->ip} --> {$input}");
$this->trigger_hooks("INPUT",$this->clients[$i],$input);
}
}
}
}
return true;
}
/*! #function disconnect
#abstract Disconnects a client from the server.
#param int - Index of the client to disconnect.
#param string - Message to send to the hooks
#result void
*/
public function disconnect($client_index,$message = "")
{
$i = $client_index;
SocketServer::debug("Client {$i} from {$this->clients[$i]->ip} Disconnecting");
$this->trigger_hooks("DISCONNECT",$this->clients[$i],$message);
$this->clients[$i]->destroy();
unset($this->clients[$i]);
}
/*! #function trigger_hooks
#abstract Triggers Hooks for a certain command.
#param string - Command who's hooks you want to trigger.
#param object - The client who activated this command.
#param string - The input from the client, or a message to be sent to the hooks.
#result void
*/
public function trigger_hooks($command,&$client,$input)
{
if(isset($this->hooks[$command]))
{
foreach($this->hooks[$command] as $function)
{
SocketServer::debug("Triggering Hook '{$function}' for '{$command}'");
$continue = call_user_func($function,$this,$client,$input);
if($continue === FALSE) { break; }
}
}
}
/*! #function infinite_loop
#abstract Runs the server code until the server is shut down.
#see loop_once
#param void
#result void
*/
public function infinite_loop()
{
$test = true;
do
{
$test = $this->loop_once();
}
while($test);
}
/*! #function debug
#static
#abstract Outputs Text directly.
#discussion Yeah, should probably make a way to turn this off.
#param string - Text to Output
#result void
*/
public static function debug($text)
{
echo("{$text}\r\n");
}
/*! #function socket_write_smart
#static
#abstract Writes data to the socket, including the length of the data, and ends it with a CRLF unless specified.
#discussion It is perfectly valid for socket_write_smart to return zero which means no bytes have been written. Be sure to use the === operator to check for FALSE in case of an error.
#param resource- Socket Instance
#param string - Data to write to the socket.
#param string - Data to end the line with. Specify a "" if you don't want a line end sent.
#result mixed - Returns the number of bytes successfully written to the socket or FALSE on failure. The error code can be retrieved with socket_last_error(). This code may be passed to socket_strerror() to get a textual explanation of the error.
*/
public static function socket_write_smart(&$sock,$string,$crlf = "\r\n")
{
SocketServer::debug("<-- {$string}");
if($crlf) { $string = "{$string}{$crlf}"; }
return socket_write($sock,$string,strlen($string));
}
/*! #function __get
#abstract Magic Method used for allowing the reading of protected variables.
#discussion You never need to use this method, simply calling $server->variable works because of this method's existence.
#param string - Variable to retrieve
#result mixed - Returns the reference to the variable called.
*/
function &__get($name)
{
return $this->{$name};
}
}
/*! #class SocketServerClient
#author Navarr Barnier
#abstract A Client Instance for use with SocketServer
*/
class SocketServerClient
{
/*! #var socket
#abstract resource - The client's socket resource, for sending and receiving data with.
*/
protected $socket;
/*! #var ip
#abstract string - The client's IP address, as seen by the server.
*/
protected $ip;
/*! #var hostname
#abstract string - The client's hostname, as seen by the server.
#discussion This variable is only set after calling lookup_hostname, as hostname lookups can take up a decent amount of time.
#see lookup_hostname
*/
protected $hostname;
/*! #var server_clients_index
#abstract int - The index of this client in the SocketServer's client array.
*/
protected $server_clients_index;
/*! #function __construct
#param resource- The resource of the socket the client is connecting by, generally the master socket.
#param int - The Index in the Server's client array.
#result void
*/
public function __construct(&$socket,$i)
{
$this->server_clients_index = $i;
$this->socket = socket_accept($socket) or die("Failed to Accept");
SocketServer::debug("New Client Connected");
socket_getpeername($this->socket,$ip);
$this->ip = $ip;
}
/*! #function lookup_hostname
#abstract Searches for the user's hostname and stores the result to hostname.
#see hostname
#param void
#result string - The hostname on success or the IP address on failure.
*/
public function lookup_hostname()
{
$this->hostname = gethostbyaddr($this->ip);
return $this->hostname;
}
/*! #function destroy
#abstract Closes the socket. Thats pretty much it.
#param void
#result void
*/
public function destroy()
{
socket_close($this->socket);
}
function &__get($name)
{
return $this->{$name};
}
function __isset($name)
{
return isset($this->{$name});
}
}
Yep, using socket_create, socket_bind, socket_listen, and socket_accept
http://www.php.net/manual/en/function.socket-create.php
http://www.php.net/manual/en/function.socket-bind.php
http://www.php.net/manual/en/function.socket-listen.php
http://www.php.net/manual/en/function.socket-accept.php
There are lots of examples on those pages on how to use them.
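For example, a bare-bones blocking listener that just appends whatever the tracker sends to a log file could look like this (IP, port and file name are placeholders):
$server = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_bind($server, '0.0.0.0', 31337);
socket_listen($server);
while (true) {
    $client = socket_accept($server);      // blocks until the tracker connects
    $data = socket_read($client, 2048);    // read one chunk of the report
    file_put_contents('gps.log', $data . PHP_EOL, FILE_APPEND);
    socket_close($client);
}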
Yes, PHP offers a socket_listen() function just for that. http://php.net/manual/en/function.socket-listen.php.
Is there a way in PHP to make HTTP calls and not wait for a response? I don't care about the response, I just want to do something like file_get_contents(), but not wait for the request to finish before executing the rest of my code. This would be super useful for setting off "events" of a sort in my application, or triggering long processes.
Any ideas?
The answer I'd previously accepted didn't work. It still waited for responses. This does work though, taken from How do I make an asynchronous GET request in PHP?
function post_without_wait($url, $params)
{
$post_params = array();
foreach ($params as $key => &$val) {
if (is_array($val)) $val = implode(',', $val);
$post_params[] = $key.'='.urlencode($val);
}
$post_string = implode('&', $post_params);
$parts=parse_url($url);
$fp = fsockopen($parts['host'],
isset($parts['port'])?$parts['port']:80,
$errno, $errstr, 30);
$out = "POST ".$parts['path']." HTTP/1.1\r\n";
$out.= "Host: ".$parts['host']."\r\n";
$out.= "Content-Type: application/x-www-form-urlencoded\r\n";
$out.= "Content-Length: ".strlen($post_string)."\r\n";
$out.= "Connection: Close\r\n\r\n";
if (isset($post_string)) $out.= $post_string;
fwrite($fp, $out);
fclose($fp);
}
If you control the target that you want to call asynchronously (e.g. your own "longtask.php"), you can close the connection from that end, and both scripts will run in parallel. It works like this:
quick.php opens longtask.php via cURL (no magic here)
longtask.php closes the connection and continues (magic!)
cURL returns to quick.php when the connection is closed
Both tasks continue in parallel
I have tried this, and it works just fine. But quick.php won't know anything about how longtask.php is doing, unless you create some means of communication between the processes.
Try this code in longtask.php, before you do anything else. It will close the connection, but still continue to run (and suppress any output):
while(ob_get_level()) ob_end_clean();
header('Connection: close');
ignore_user_abort();
ob_start();
echo('Connection Closed');
$size = ob_get_length();
header("Content-Length: $size");
ob_end_flush();
flush();
The code is copied from the PHP manual's user contributed notes and somewhat improved.
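For completeness, the calling side (quick.php) can be as simple as this; the URL is of course hypothetical:
$ch = curl_init('http://example.com/longtask.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);   // returns as soon as longtask.php has closed the connection
curl_close($ch);
// ...quick.php continues here while longtask.php is still working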
You can do trickery by using exec() to invoke something that can do HTTP requests, like wget, but you must direct all output from the program to somewhere, like a file or /dev/null, otherwise the PHP process will wait for that output.
If you want to separate the process from the apache thread entirely, try something like (I'm not sure about this, but I hope you get the idea):
exec('bash -c "wget -O (url goes here) > /dev/null 2>&1 &"');
It's not a nice business, and you'll probably want something like a cron job invoking a heartbeat script which polls an actual database event queue to do real asynchronous events.
You can use this library: https://github.com/stil/curl-easy
It's pretty straightforward then:
<?php
$request = new cURL\Request('http://yahoo.com/');
$request->getOptions()->set(CURLOPT_RETURNTRANSFER, true);
// Specify function to be called when your request is complete
$request->addListener('complete', function (cURL\Event $event) {
$response = $event->response;
$httpCode = $response->getInfo(CURLINFO_HTTP_CODE);
$html = $response->getContent();
echo "\nDone.\n";
});
// Loop below will run as long as request is processed
$timeStart = microtime(true);
while ($request->socketPerform()) {
printf("Running time: %dms \r", (microtime(true) - $timeStart)*1000);
// Here you can do anything else, while your request is in progress
}
Below you can see the console output of the above example: it displays a simple live clock indicating how long the request has been running.
As of 2018, Guzzle has become the de facto standard library for HTTP requests, used in several modern frameworks. It's written in pure PHP and does not require installing any custom extensions.
It can do asynchronous HTTP calls very nicely, and even pool them such as when you need to make 100 HTTP calls, but don't want to run more than 5 at a time.
Concurrent request example
use GuzzleHttp\Client;
use GuzzleHttp\Promise;
$client = new Client(['base_uri' => 'http://httpbin.org/']);
// Initiate each request but do not block
$promises = [
'image' => $client->getAsync('/image'),
'png' => $client->getAsync('/image/png'),
'jpeg' => $client->getAsync('/image/jpeg'),
'webp' => $client->getAsync('/image/webp')
];
// Wait on all of the requests to complete. Throws a ConnectException
// if any of the requests fail
$results = Promise\unwrap($promises);
// Wait for the requests to complete, even if some of them fail
$results = Promise\settle($promises)->wait();
// You can access each result using the key provided to the unwrap
// function.
echo $results['image']['value']->getHeader('Content-Length')[0];
echo $results['png']['value']->getHeader('Content-Length')[0];
See http://docs.guzzlephp.org/en/stable/quickstart.html#concurrent-requests
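For the "100 calls, at most 5 at a time" case mentioned above, Guzzle also ships a Pool; here is a sketch (the URLs are placeholders):
use GuzzleHttp\Client;
use GuzzleHttp\Pool;
use Psr\Http\Message\ResponseInterface;

$client = new Client();

// A generator that lazily yields the 100 requests
$requests = function () use ($client) {
    for ($i = 0; $i < 100; $i++) {
        yield function () use ($client, $i) {
            return $client->getAsync('http://httpbin.org/get?i=' . $i);
        };
    }
};

$pool = new Pool($client, $requests(), [
    'concurrency' => 5, // never more than 5 requests in flight
    'fulfilled' => function (ResponseInterface $response, $index) {
        // handle a successful response
    },
    'rejected' => function ($reason, $index) {
        // handle a failed request
    },
]);

$pool->promise()->wait(); // block until the whole pool has been processed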
/**
* Asynchronously execute/include a PHP file. Does not record the output of the file anywhere.
*
* @param string $filename file to execute, relative to calling script
* @param string $options (optional) arguments to pass to file via the command line
*/
function asyncInclude($filename, $options = '') {
exec("/path/to/php -f {$filename} {$options} >> /dev/null &");
}
Fake a request abort by using cURL with a low CURLOPT_TIMEOUT_MS,
and set ignore_user_abort(true) in the background script to keep processing after the connection is closed.
With this method there is no need to implement connection handling via headers and output buffering, which is too dependent on the OS, browser and PHP version.
Master process
function async_curl($background_process=''){
//-------------get curl contents----------------
$ch = curl_init($background_process);
curl_setopt_array($ch, array(
CURLOPT_HEADER => 0,
CURLOPT_RETURNTRANSFER =>true,
CURLOPT_NOSIGNAL => 1, //to timeout immediately if the value is < 1000 ms
CURLOPT_TIMEOUT_MS => 50, //The maximum number of mseconds to allow cURL functions to execute
CURLOPT_VERBOSE => 1,
CURLOPT_HEADER => 1
));
$out = curl_exec($ch);
//-------------parse curl contents----------------
//$header_size = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
//$header = substr($out, 0, $header_size);
//$body = substr($out, $header_size);
curl_close($ch);
return true;
}
async_curl('http://example.com/background_process_1.php');
Background process
ignore_user_abort(true);
//do something...
NB
If you want cURL to timeout in less than one second, you can use
CURLOPT_TIMEOUT_MS, although there is a bug/"feature" on "Unix-like
systems" that causes libcurl to timeout immediately if the value is <
1000 ms with the error "cURL Error (28): Timeout was reached". The
explanation for this behavior is:
[...]
The solution is to disable signals using CURLOPT_NOSIGNAL
Resources
curl timeout less than 1000ms always fails?
http://www.php.net/manual/en/function.curl-setopt.php#104597
http://php.net/manual/en/features.connection-handling.php
The swoole extension. https://github.com/matyhtf/swoole
Asynchronous & concurrent networking framework for PHP.
$client = new swoole_client(SWOOLE_SOCK_TCP, SWOOLE_SOCK_ASYNC);
$client->on("connect", function($cli) {
$cli->send("hello world\n");
});
$client->on("receive", function($cli, $data){
echo "Receive: $data\n";
});
$client->on("error", function($cli){
echo "connect fail\n";
});
$client->on("close", function($cli){
echo "close\n";
});
$client->connect('127.0.0.1', 9501, 0.5);
Let me show you my way :)
It needs Node.js installed on the server.
(My server sends 1000 HTTPS GET requests in only 2 seconds.)
url.php :
<?
$urls = array_fill(0, 100, 'http://google.com/blank.html');
function execinbackground($cmd) {
if (substr(php_uname(), 0, 7) == "Windows"){
pclose(popen("start /B ". $cmd, "r"));
}
else {
exec($cmd . " > /dev/null &");
}
}
fwrite(fopen("urls.txt","w"), implode("\n", $urls));
execinbackground("nodejs urlscript.js urls.txt");
// { do your work while get requests being executed.. }
?>
urlscript.js:
var https = require('https');
var url = require('url');
var http = require('http');
var fs = require('fs');
var dosya = process.argv[2];
var logdosya = 'log.txt';
var count=0;
http.globalAgent.maxSockets = 300;
https.globalAgent.maxSockets = 300;
setTimeout(timeout,100000); // maximum execution time (in ms)
function trim(string) {
return string.replace(/^\s*|\s*$/g, '')
}
fs.readFile(process.argv[2], 'utf8', function (err, data) {
if (err) {
throw err;
}
parcala(data);
});
function parcala(data) {
var data = data.split("\n");
count=''+data.length+'-'+data[1];
data.forEach(function (d) {
req(trim(d));
});
/*
fs.unlink(dosya, function d() {
console.log('<%s> file deleted', dosya);
});
*/
}
function req(link) {
var linkinfo = url.parse(link);
if (linkinfo.protocol == 'https:') {
var options = {
host: linkinfo.host,
port: 443,
path: linkinfo.path,
method: 'GET'
};
https.get(options, function(res) {res.on('data', function(d) {});}).on('error', function(e) {console.error(e);});
} else {
var options = {
host: linkinfo.host,
port: 80,
path: linkinfo.path,
method: 'GET'
};
http.get(options, function(res) {res.on('data', function(d) {});}).on('error', function(e) {console.error(e);});
}
}
process.on('exit', onExit);
function onExit() {
log();
}
function timeout()
{
console.log("i am too far gone");process.exit();
}
function log()
{
var fd = fs.openSync(logdosya, 'a+');
fs.writeSync(fd, dosya + '-'+count+'\n');
fs.closeSync(fd);
}
You can use non-blocking sockets and one of pecl extensions for PHP:
http://php.net/event
http://php.net/libevent
http://php.net/ev
https://github.com/m4rw3r/php-libev
You can use a library which gives you an abstraction layer between your code and a pecl extension: https://github.com/reactphp/event-loop
You can also use an async HTTP client based on the previous library: https://github.com/reactphp/http-client
See the other ReactPHP libraries: http://reactphp.org
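To give an idea of what the event-loop abstraction looks like, here is a tiny react/event-loop example (just timers, no HTTP; it assumes the package is installed via Composer):
require 'vendor/autoload.php';

$loop = React\EventLoop\Factory::create();

$loop->addTimer(1.0, function () {
    echo "fires once, one second later, without blocking the loop\n";
});
$loop->addPeriodicTimer(0.5, function () {
    echo "other work keeps running meanwhile\n";
});

$loop->run();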
Be careful with an asynchronous model.
I recommend watching this video on YouTube: http://www.youtube.com/watch?v=MWNcItWuKpI
class async_file_get_contents extends Thread{
public $ret;
public $url;
public $finished;
public function __construct($url) {
$this->finished=false;
$this->url=$url;
}
public function run() {
$this->ret=file_get_contents($this->url);
$this->finished=true;
}
}
$afgc=new async_file_get_contents("http://example.org/file.ext");
$afgc->start(); // requires the pthreads extension; start() runs run() in a separate thread
Event Extension
The Event extension is very appropriate. It is a port of the Libevent library, which is designed for event-driven I/O, mainly for networking.
Below is a sample HTTP client class based on the Event extension. It allows you to schedule a number of HTTP requests, then run them asynchronously.
http-client.php
<?php
class MyHttpClient {
/// #var EventBase
protected $base;
/// #var array Instances of EventHttpConnection
protected $connections = [];
public function __construct() {
$this->base = new EventBase();
}
/**
* Dispatches all pending requests (events)
*
* #return void
*/
public function run() {
$this->base->dispatch();
}
public function __destruct() {
// Destroy connection objects explicitly, don't wait for GC.
// Otherwise, EventBase may be free'd earlier.
$this->connections = null;
}
/**
* #brief Adds a pending HTTP request
*
* #param string $address Hostname, or IP
* #param int $port Port number
* #param array $headers Extra HTTP headers
* #param int $cmd A EventHttpRequest::CMD_* constant
* #param string $resource HTTP request resource, e.g. '/page?a=b&c=d'
*
* #return EventHttpRequest|false
*/
public function addRequest($address, $port, array $headers,
$cmd = EventHttpRequest::CMD_GET, $resource = '/')
{
$conn = new EventHttpConnection($this->base, null, $address, $port);
$conn->setTimeout(5);
$req = new EventHttpRequest([$this, '_requestHandler'], $this->base);
foreach ($headers as $k => $v) {
$req->addHeader($k, $v, EventHttpRequest::OUTPUT_HEADER);
}
$req->addHeader('Host', $address, EventHttpRequest::OUTPUT_HEADER);
$req->addHeader('Connection', 'close', EventHttpRequest::OUTPUT_HEADER);
if ($conn->makeRequest($req, $cmd, $resource)) {
$this->connections []= $conn;
return $req;
}
return false;
}
/**
* #brief Handles an HTTP request
*
* #param EventHttpRequest $req
* #param mixed $unused
*
* #return void
*/
public function _requestHandler($req, $unused) {
if (is_null($req)) {
echo "Timed out\n";
} else {
$response_code = $req->getResponseCode();
if ($response_code == 0) {
echo "Connection refused\n";
} elseif ($response_code != 200) {
echo "Unexpected response: $response_code\n";
} else {
echo "Success: $response_code\n";
$buf = $req->getInputBuffer();
echo "Body:\n";
while ($s = $buf->readLine(EventBuffer::EOL_ANY)) {
echo $s, PHP_EOL;
}
}
}
}
}
$address = "my-host.local";
$port = 80;
$headers = [ 'User-Agent' => 'My-User-Agent/1.0', ];
$client = new MyHttpClient();
// Add pending requests
for ($i = 0; $i < 10; $i++) {
$client->addRequest($address, $port, $headers,
EventHttpRequest::CMD_GET, '/test.php?a=' . $i);
}
// Dispatch pending requests
$client->run();
test.php
This is a sample script on the server side.
<?php
echo 'GET: ', var_export($_GET, true), PHP_EOL;
echo 'User-Agent: ', $_SERVER['HTTP_USER_AGENT'] ?? '(none)', PHP_EOL;
Usage
php http-client.php
Sample Output
Success: 200
Body:
GET: array (
'a' => '1',
)
User-Agent: My-User-Agent/1.0
Success: 200
Body:
GET: array (
'a' => '0',
)
User-Agent: My-User-Agent/1.0
Success: 200
Body:
GET: array (
'a' => '3',
)
...
(Trimmed.)
Note, the code is designed for long-term processing in the CLI SAPI.
For custom protocols, consider using low-level API, i.e. buffer events, buffers. For SSL/TLS communications, I would recommend the low-level API in conjunction with Event's ssl context. Examples:
SSL echo server
SSL client
Although Libevent's HTTP API is simple, it is not as flexible as buffer events. For example, the HTTP API currently doesn't support custom HTTP methods. But it is possible to implement virtually any protocol using the low-level API.
Ev Extension
I have also written a sample of another HTTP client using Ev extension with sockets in non-blocking mode. The code is slightly more verbose than the sample based on Event, because Ev is a general purpose event loop. It doesn't provide network-specific functions, but its EvIo watcher is capable of listening to a file descriptor encapsulated into the socket resource, in particular.
This is a sample HTTP client based on Ev extension.
Ev extension implements a simple yet powerful general purpose event loop. It doesn't provide network-specific watchers, but its I/O watcher can be used for asynchronous processing of sockets.
The following code shows how HTTP requests can be scheduled for parallel processing.
http-client.php
<?php
class MyHttpRequest {
/// #var MyHttpClient
private $http_client;
/// #var string
private $address;
/// #var string HTTP resource such as /page?get=param
private $resource;
/// #var string HTTP method such as GET, POST etc.
private $method;
/// #var int
private $service_port;
/// #var resource Socket
private $socket;
/// #var double Connection timeout in seconds.
private $timeout = 10.;
/// #var int Chunk size in bytes for socket_recv()
private $chunk_size = 20;
/// #var EvTimer
private $timeout_watcher;
/// #var EvIo
private $write_watcher;
/// #var EvIo
private $read_watcher;
/// #var EvTimer
private $conn_watcher;
/// #var string buffer for incoming data
private $buffer;
/// #var array errors reported by sockets extension in non-blocking mode.
private static $e_nonblocking = [
11, // EAGAIN or EWOULDBLOCK
115, // EINPROGRESS
];
/**
* #param MyHttpClient $client
* #param string $host Hostname, e.g. google.co.uk
* #param string $resource HTTP resource, e.g. /page?a=b&c=d
* #param string $method HTTP method: GET, HEAD, POST, PUT etc.
* #throws RuntimeException
*/
public function __construct(MyHttpClient $client, $host, $resource, $method) {
$this->http_client = $client;
$this->host = $host;
$this->resource = $resource;
$this->method = $method;
// Get the port for the WWW service
$this->service_port = getservbyname('www', 'tcp');
// Get the IP address for the target host
$this->address = gethostbyname($this->host);
// Create a TCP/IP socket
$this->socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
if (!$this->socket) {
throw new RuntimeException("socket_create() failed: reason: " .
socket_strerror(socket_last_error()));
}
// Set O_NONBLOCK flag
socket_set_nonblock($this->socket);
$this->conn_watcher = $this->http_client->getLoop()
->timer(0, 0., [$this, 'connect']);
}
public function __destruct() {
$this->close();
}
private function freeWatcher(&$w) {
if ($w) {
$w->stop();
$w = null;
}
}
/**
* Deallocates all resources of the request
*/
private function close() {
if ($this->socket) {
socket_close($this->socket);
$this->socket = null;
}
$this->freeWatcher($this->timeout_watcher);
$this->freeWatcher($this->read_watcher);
$this->freeWatcher($this->write_watcher);
$this->freeWatcher($this->conn_watcher);
}
/**
* Initializes a connection on socket
* #return bool
*/
public function connect() {
$loop = $this->http_client->getLoop();
$this->timeout_watcher = $loop->timer($this->timeout, 0., [$this, '_onTimeout']);
$this->write_watcher = $loop->io($this->socket, Ev::WRITE, [$this, '_onWritable']);
return socket_connect($this->socket, $this->address, $this->service_port);
}
/**
* Callback for timeout (EvTimer) watcher
*/
public function _onTimeout(EvTimer $w) {
$w->stop();
$this->close();
}
/**
* Callback which is called when the socket becomes writable
*/
public function _onWritable(EvIo $w) {
$this->timeout_watcher->stop();
$w->stop();
$in = implode("\r\n", [
"{$this->method} {$this->resource} HTTP/1.1",
"Host: {$this->host}",
'Connection: Close',
]) . "\r\n\r\n";
if (!socket_write($this->socket, $in, strlen($in))) {
trigger_error("Failed writing $in to socket", E_USER_ERROR);
return;
}
$loop = $this->http_client->getLoop();
$this->read_watcher = $loop->io($this->socket,
Ev::READ, [$this, '_onReadable']);
// Continue running the loop
$loop->run();
}
/**
* Callback which is called when the socket becomes readable
*/
public function _onReadable(EvIo $w) {
// recv() 20 bytes in non-blocking mode
$ret = socket_recv($this->socket, $out, 20, MSG_DONTWAIT);
if ($ret) {
// Still have data to read. Append the read chunk to the buffer.
$this->buffer .= $out;
} elseif ($ret === 0) {
// All is read
printf("\n<<<<\n%s\n>>>>", rtrim($this->buffer));
fflush(STDOUT);
$w->stop();
$this->close();
return;
}
// Caught EINPROGRESS, EAGAIN, or EWOULDBLOCK
if (in_array(socket_last_error(), static::$e_nonblocking)) {
return;
}
$w->stop();
$this->close();
}
}
/////////////////////////////////////
class MyHttpClient {
/// #var array Instances of MyHttpRequest
private $requests = [];
/// #var EvLoop
private $loop;
public function __construct() {
// Each HTTP client runs its own event loop
$this->loop = new EvLoop();
}
public function __destruct() {
$this->loop->stop();
}
/**
* #return EvLoop
*/
public function getLoop() {
return $this->loop;
}
/**
* Adds a pending request
*/
public function addRequest(MyHttpRequest $r) {
$this->requests []= $r;
}
/**
* Dispatches all pending requests
*/
public function run() {
$this->loop->run();
}
}
/////////////////////////////////////
// Usage
$client = new MyHttpClient();
foreach (range(1, 10) as $i) {
$client->addRequest(new MyHttpRequest($client, 'my-host.local', '/test.php?a=' . $i, 'GET'));
}
$client->run();
Testing
Suppose http://my-host.local/test.php script is printing the dump of $_GET:
<?php
echo 'GET: ', var_export($_GET, true), PHP_EOL;
Then the output of php http-client.php command will be similar to the following:
<<<<
HTTP/1.1 200 OK
Server: nginx/1.10.1
Date: Fri, 02 Dec 2016 12:39:54 GMT
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: close
X-Powered-By: PHP/7.0.13-pl0-gentoo
1d
GET: array (
'a' => '3',
)
0
>>>>
<<<<
HTTP/1.1 200 OK
Server: nginx/1.10.1
Date: Fri, 02 Dec 2016 12:39:54 GMT
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: close
X-Powered-By: PHP/7.0.13-pl0-gentoo
1d
GET: array (
'a' => '2',
)
0
>>>>
...
(trimmed)
Note, in PHP 5 the sockets extension may log warnings for EINPROGRESS, EAGAIN, and EWOULDBLOCK errno values. It is possible to turn off the logs with
error_reporting(E_ERROR);
Concerning "the Rest" of the Code
I just want to do something like file_get_contents(), but not wait for the request to finish before executing the rest of my code.
The code that is supposed to run in parallel with the network requests can be executed within the callback of an Event timer, or Ev's idle watcher, for instance. You can easily figure it out from the samples mentioned above. Otherwise, I'll add another example :)
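For instance, with the Ev sample above, "the rest of the code" can be hung on a repeating timer on the client's own loop; this is just an illustrative sketch, not part of the original sample:
$client = new MyHttpClient();
foreach (range(1, 10) as $i) {
    $client->addRequest(new MyHttpRequest($client, 'my-host.local', '/test.php?a=' . $i, 'GET'));
}

// A repeating timer on the same loop; its callback runs between socket events.
$client->getLoop()->timer(0, 0.5, function () {
    echo "doing other work while the requests are in flight\n";
});

$client->run();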
I find this package quite useful and very simple: https://github.com/amphp/parallel-functions
<?php
use function Amp\ParallelFunctions\parallelMap;
use function Amp\Promise\wait;
$responses = wait(parallelMap([
'https://google.com/',
'https://github.com/',
'https://stackoverflow.com/',
], function ($url) {
return file_get_contents($url);
}));
It will load all 3 URLs in parallel.
You can also use class instance methods in the closure.
For example, I use a Laravel extension based on this package: https://github.com/spatie/laravel-collection-macros#parallelmap
Here is my code:
/**
* Get domains with all needed data
*/
protected function getDomainsWithdata(): Collection
{
return $this->opensrs->getDomains()->parallelMap(function ($domain) {
$contact = $this->opensrs->getDomainContact($domain);
$contact['domain'] = $domain;
return $contact;
}, 10);
}
It loads all the needed data in 10 parallel threads, and instead of the 50 seconds it took without async, it finishes in just 8 seconds.
Here is a working example, just run it and open storage.txt afterwards, to check the magical result
<?php
function curlGet($target){
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $target);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$result = curl_exec ($ch);
curl_close ($ch);
return $result;
}
// Its the next 3 lines that do the magic
ignore_user_abort(true);
header("Connection: close"); header("Content-Length: 0");
echo str_repeat("s", 100000); flush();
$i = $_GET['i'];
if(!is_numeric($i)) $i = 1;
if($i > 4) exit;
if($i == 1) file_put_contents('storage.txt', '');
file_put_contents('storage.txt', file_get_contents('storage.txt') . time() . "\n");
sleep(5);
curlGet($_SERVER['HTTP_HOST'] . $_SERVER['SCRIPT_NAME'] . '?i=' . ($i + 1));
curlGet($_SERVER['HTTP_HOST'] . $_SERVER['SCRIPT_NAME'] . '?i=' . ($i + 1));
Here is my own PHP function I use when I POST to a specific URL of any page....
Sample usage of my function:
<?php
parse_str("email=myemail@ehehehahaha.com&subject=this is just a test");
$_POST['email']=$email;
$_POST['subject']=$subject;
echo HTTP_Post("http://example.com/mail.php",$_POST);
exit;
?>
<?php
/*********HTTP POST using FSOCKOPEN **************/
// by ArbZ
function HTTP_Post($URL,$data, $referrer="") {
// parsing the given URL
$URL_Info=parse_url($URL);
// Building referrer
if($referrer=="") // if not given use this script as referrer
$referrer=$_SERVER["SCRIPT_URI"];
// making string from $data
foreach($data as $key=>$value)
$values[]="$key=".urlencode($value);
$data_string=implode("&",$values);
// Find out which port is needed - if not given use standard (=80)
if(!isset($URL_Info["port"]))
$URL_Info["port"]=80;
// building POST-request: HTTP_HEADERs
$request = "POST ".$URL_Info["path"]." HTTP/1.1\n";
$request.="Host: ".$URL_Info["host"]."\n";
$request.="Referer: $referer\n";
$request.="Content-type: application/x-www-form-urlencoded\n";
$request.="Content-length: ".strlen($data_string)."\n";
$request.="Connection: close\n";
$request.="\n";
$request.=$data_string."\n";
$fp = fsockopen($URL_Info["host"],$URL_Info["port"]);
fputs($fp, $request);
$result = "";
while(!feof($fp)) {
$result .= fgets($fp, 128);
}
fclose($fp); //$eco = nl2br();
function getTextBetweenTags($string, $tagname) {
$pattern = "/<$tagname ?.*>(.*)<\/$tagname>/";
preg_match($pattern, $string, $matches);
return $matches[1];
}
//STORE THE FETCHED CONTENTS to a VARIABLE, because its way better and fast...
$str = $result;
$txt = getTextBetweenTags($str, "span"); $eco = $txt; $result = explode("&",$result);
return $result[1];
}
ReactPHP async http client
https://github.com/shuchkin/react-http-client
Install via Composer
$ composer require shuchkin/react-http-client
Async HTTP GET
// get.php
$loop = \React\EventLoop\Factory::create();
$http = new \Shuchkin\ReactHTTP\Client( $loop );
$http->get( 'https://tools.ietf.org/rfc/rfc2068.txt' )->then(
function( $content ) {
echo $content;
},
function ( \Exception $ex ) {
echo 'HTTP error '.$ex->getCode().' '.$ex->getMessage();
}
);
$loop->run();
Run php in CLI-mode
$ php get.php
Symfony HttpClient is asynchronous https://symfony.com/doc/current/components/http_client.html.
For example you can
use Symfony\Component\HttpClient\HttpClient;
$client = HttpClient::create();
$response1 = $client->request('GET', 'https://website1');
$response2 = $client->request('GET', 'https://website1');
$response3 = $client->request('GET', 'https://website1');
//these 3 calls will return immediately
//but the requests will fire to the website1 webserver
$response1->getContent(); //this will block until content is fetched
$response2->getContent(); //same
$response3->getContent(); //same
Well, the timeout can be set in milliseconds,
see "CURLOPT_CONNECTTIMEOUT_MS" in http://www.php.net/manual/en/function.curl-setopt