Can a PHP script be stopped? - php

I'm trying to build a PayPal IPN system. IPN (Instant Payment Notification) is PayPal's mechanism for automatically verifying money transfers, and they provide a basic sample script for it.
The flow is simple: your script receives the $_POST[] data, opens a socket back to PayPal, and PayPal writes a "valid" or "invalid" response to the socket.
My problem is that about 50% of the time the socket fails to open and I get a connection lost error. When the script does connect, everything works fine. So I changed it to try up to 20 times instead of once:
<?php
//...
mail("mi@mail.com", "subject", "executing", "some headers"); // mail me when this executes
$try = 20;
do {
    $fp = @fsockopen('ssl://www.paypal.com', 443, $errno, $errstr, 15);
    $try--;
} while ($try > 0 && !$fp);
if (!$fp) { // HTTP ERROR
    mail("mi@mail.com", "subject", "error_message_not_connecting", "some headers");
} else {
    mail("mi@mail.com", "subject", "connected_reading_socket", "some headers");
    //fputs(..); and the read loop.
}
?>
In my tests it now works 100% of the time across several tries. But with real transfers it still fails 20-30% of the time: I get the first mail, but never the second one on the failing runs.
I'm wondering: if PayPal only keeps the connection to my server open for one second, can the PHP script be stopped partway through the retries? Or any idea what is wrong here?

Sending the mail can fail too, especially if you have network issues. You should log the failure conditions, for both mail() and your fsockopen(), so you can revisit them afterwards.
Also, your fsockopen() can get stuck. You have a 15-second timeout and you try 20 times, so in the worst case your script runs for 20*15 = 300 seconds = 5 minutes, which is probably longer than your PHP execution limit, so PHP would abort your script mid-process. max_execution_time is only 30 seconds by default.
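A minimal sketch of that logging, assuming a writable log path; set_time_limit(0) lifts the execution limit so the retry loop can finish:
<?php
set_time_limit(0); // or raise max_execution_time in php.ini
$try = 20;
do {
    $fp = @fsockopen('ssl://www.paypal.com', 443, $errno, $errstr, 15);
    if (!$fp) {
        // message_type 3 appends to the given file; the path is an assumption
        error_log(date('c') . " fsockopen failed: $errstr ($errno)\n", 3, '/tmp/ipn.log');
    }
} while (--$try > 0 && !$fp);
?>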

A PHP script can be stopped with exit;.
You can pause PHP script processing with sleep($num_seconds).

I used to get similar problems: strange behavior when using sockets.
Better to use cURL instead; it's more stable.
http://leepeng.blogspot.com/2006/04/standard-paypal-php-integration.html
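For reference, a minimal sketch of the IPN postback using cURL. The endpoint URL and the VERIFIED/INVALID reply words follow PayPal's classic IPN protocol, but check them against the current PayPal docs:
<?php
// Echo the received POST back to PayPal with cmd=_notify-validate prepended.
$postback = 'cmd=_notify-validate';
foreach ($_POST as $key => $value) {
    $postback .= '&' . urlencode($key) . '=' . urlencode($value);
}
$ch = curl_init('https://www.paypal.com/cgi-bin/webscr');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $postback);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 30);
$response = curl_exec($ch);
curl_close($ch);
if ($response === 'VERIFIED') {
    // genuine notification; safe to process the payment
}
?>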

I found the error. A PHP script can be stopped when the user closes the connection to the server (usually by clicking the stop button in the browser, or in this case when PayPal closes the socket).
There are 3 ways a script can be stopped:
1. the script finishes
2. the user closes the connection to the server
3. a timeout occurs
I used the function ignore_user_abort(true), and I don't have any more problems.
http://php.net/manual/en/function.ignore-user-abort.php
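In practice that just means calling it at the top of the IPN handler:
<?php
ignore_user_abort(true); // keep running even if PayPal closes the connection
set_time_limit(0);       // optional: also lift the execution time limit
// ... rest of the IPN script
?>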

Related

Best way to execute commands to TCP/IP console from PHP

Before I start, excuse my English, I'm from Holland :)
I have a question regarding the use of PHP's fsockopen.
My Prerequisites
So basically, I have a Windows program running in the background which has a remote console over TCP/IP that I need to connect to so I can execute a few commands. I am able to connect to that console with KiTTY, and execute my commands without any problems.
My Solution
So the issue I have right now is that I need to be able to execute these commands from the browser. I have searched the interwebs for the best ways to do this, and what I found was to use PHP's fsockopen to connect to my console. The code I tried is as follows:
$SOCKET = fsockopen("127.0.0.1", 12101, $errno, $errstr);
if ($SOCKET) {
    echo "Connected!";
}
$firstRead = fread($SOCKET, 8000);
echo $firstRead;
And using fputs to send a command:
fputs($SOCKET, "HELP \r\n");
And after, reading out my response with this:
$response = fread($SOCKET, 8000);
echo $response;
The Problem(s)
But I have encountered a few weird problems when testing this.
As soon as I execute a command like "HELP", I can see from my KiTTY session that the command was executed and that I got a response, but when I read out the response with fread I get nothing. But when I use a loop to read it out like this, it reads something from the console on the second try almost every time:
$i = 0;
do {
    $response = fread($SOCKET, 8000);
    $i++;
} while (strlen($response) < 5 || $i < 5);
(Sometimes it DOES read something from the console on the first try, but mostly it only reads something on the second try.)
The Question
Now my question(s): why does it behave so strangely? Is it because I am doing something wrong? And is this really the best way to do this?
Sidenote
When this works, I need to be able to call these PHP functions (or something similar) with a bunch of AJAX requests and get the response to show in the browser. This is an absolute MUST, so please keep this in mind when writing a possible answer :)
Thanks everyone!
When you create a socket with fsockopen() in PHP, you may also want to specify whether it is blocking or non-blocking. With a non-blocking socket, a read returns false on error or when the connection is closed, and an empty string until some data is received; with a blocking socket, a read blocks until there is some data to read (or returns an empty string if a timeout is hit).
The behavior you describe looks like a non-blocking socket.
Note that fsockopen() returns a stream, so its blocking mode is changed with stream_set_blocking(); the socket_set_block() and socket_set_nonblock() functions (like socket_read() and socket_write()) apply to sockets created with the lower-level socket_create() API instead.
When your socket code works, there won't be any problems with AJAX requests, but remember to set a timeout on the socket; otherwise, if the server is down or simply too slow, the request will fail with an error (a fatal timeout from PHP if set_time_limit is exceeded, or a JavaScript error when the browser's timeout is reached).
Here are the links to the manual pages for stream_set_blocking and stream_set_timeout, which apply to the streams fsockopen() returns.
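A minimal sketch of a blocking read with a timeout, assuming the same console on 127.0.0.1:12101:
<?php
$socket = fsockopen("127.0.0.1", 12101, $errno, $errstr, 5);
if (!$socket) {
    die("Connection failed: $errstr ($errno)");
}
stream_set_blocking($socket, true); // block until data arrives...
stream_set_timeout($socket, 5);     // ...but give up after 5 seconds
fwrite($socket, "HELP\r\n");
$response = '';
while (($line = fgets($socket, 8000)) !== false) {
    $response .= $line; // stops on timeout or when the console closes
}
fclose($socket);
echo $response;
?>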

PHP cURL; Wait for API status change before continuing [duplicate]

I work on a somewhat large web application, and the backend is mostly in PHP. There are several places in the code where I need to complete some task, but I don't want to make the user wait for the result. For example, when creating a new account, I need to send them a welcome email. But when they hit the 'Finish Registration' button, I don't want to make them wait until the email is actually sent, I just want to start the process, and return a message to the user right away.
Up until now, in some places I've been using what feels like a hack with exec(). Basically doing things like:
exec("doTask.php $arg1 $arg2 $arg3 >/dev/null 2>&1 &");
Which appears to work, but I'm wondering if there's a better way. I'm considering writing a system which queues up tasks in a MySQL table, and a separate long-running PHP script that queries that table once a second, and executes any new tasks it finds. This would also have the advantage of letting me split the tasks among several worker machines in the future if I needed to.
Am I re-inventing the wheel? Is there a better solution than the exec() hack or the MySQL queue?
I've used the queuing approach, and it works well, as you can defer that processing until your server load is idle. That lets you manage your load quite effectively, provided you can easily partition off "tasks which aren't urgent".
Rolling your own isn't too tricky, here's a few other options to check out:
Gearman - this answer was written in 2009, and since then Gearman has become a popular option; see the comments below.
ActiveMQ if you want a full blown open source message queue.
ZeroMQ - this is a pretty cool socket library which makes it easy to write distributed code without having to worry too much about the socket programming itself. You could use it for message queuing on a single host - you would simply have your webapp push something to a queue that a continuously running console app would consume at the next suitable opportunity
beanstalkd - only found this one while writing this answer, but looks interesting
dropr is a PHP-based message queue project, but it hasn't been actively maintained since Sep 2010
php-enqueue is a recently (2017) maintained wrapper around a variety of queue systems
Finally, a blog post about using memcached for message queuing
Another, perhaps simpler, approach is to use ignore_user_abort - once you've sent the page to the user, you can do your final processing without fear of premature termination, though this does have the effect of appearing to prolong the page load from the user perspective.
When you just want to execute one or several HTTP requests without having to wait for the response, there is a simple PHP solution, as well.
In the calling script:
$socketcon = fsockopen($host, 80, $errno, $errstr, 10);
if ($socketcon) {
    $socketdata = "GET $remote_house/script.php?parameters=... HTTP/1.1\r\nHost: $host\r\nConnection: Close\r\n\r\n";
    fwrite($socketcon, $socketdata);
    fclose($socketcon);
}
// repeat this with different parameters as often as you like
On the called script.php, you can invoke these PHP functions in the first lines:
ignore_user_abort(true);
set_time_limit(0);
This causes the script to continue running without time limit when the HTTP connection is closed.
Another way to fork processes is via curl. You can set up your internal tasks as a webservice. For example:
http://domain/tasks/t1
http://domain/tasks/t2
Then in your user-accessed scripts, make calls to the service:
$service->addTask('t1', $data); // post data to URL via curl
Your service can keep track of the queue of tasks with MySQL or whatever you like. The point is that it's all wrapped up within the service, and your script just consumes URLs. This frees you up to move the service to another machine/server if necessary (i.e. it scales easily).
Adding HTTP authorization or a custom authorization scheme (like Amazon's web services) lets you open up your tasks to be consumed by other people/services (if you want), and you could take it further and add a monitoring service on top to keep track of queue and task status.
http://domain/queue?task=t1
http://domain/queue?task=t2
http://domain/queue/t1/100931
It does take a bit of set-up work but there are a lot of benefits.
If it's just a question of running expensive tasks, and PHP-FPM is supported, why not use the fastcgi_finish_request() function?
This function flushes all response data to the client and finishes the request. This allows time-consuming tasks to be performed without leaving the connection to the client open.
It isn't real asynchronicity; the pattern is:
1. Run all your main code first.
2. Execute fastcgi_finish_request().
3. Do all the heavy stuff.
Once again, PHP-FPM is needed.
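A minimal sketch of that pattern (sendWelcomeEmail() and $userId are hypothetical):
<?php
echo "Registration complete!"; // 1. main code: build and send the response
fastcgi_finish_request();      // 2. flush it and close the connection (PHP-FPM only)
sendWelcomeEmail($userId);     // 3. hypothetical heavy work, now invisible to the user
?>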
I've used Beanstalkd for one project, and planned to again. I've found it to be an excellent way to run asynchronous processes.
A couple of things I've done with it are:
Image resizing - with a lightly loaded queue passing jobs off to a CLI-based PHP script, resizing large (2 MB+) images worked just fine, while trying to resize the same images within a mod_php instance regularly ran into memory-space issues (I limited the PHP process to 32 MB, and the resizing took more than that)
near-future checks - beanstalkd has delays available to it (make this job available to run only after X seconds), so I can fire off 5 or 10 checks for an event a little later in time
I wrote a Zend-Framework based system to decode a 'nice' url, so for example, to resize an image it would call QueueTask('/image/resize/filename/example.jpg'). The URL was first decoded to an array(module,controller,action,parameters), and then converted to JSON for injection to the queue itself.
A long running cli script then picked up the job from the queue, ran it (via Zend_Router_Simple), and if required, put information into memcached for the website PHP to pick up as required when it was done.
One wrinkle I also put in was that the CLI script only ran for 50 loops before restarting; if it restarted as planned, it would do so immediately (being run via a bash script), but if there was a problem and it exited with exit(0) (the default value for exit; or die();), it would first pause for a couple of seconds.
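For comparison, a rough sketch of that web/worker split using the Pheanstalk client for beanstalkd (method names assume Pheanstalk v4's API; the payload shape is illustrative):
<?php
require 'vendor/autoload.php';

use Pheanstalk\Pheanstalk;

$queue = Pheanstalk::create('127.0.0.1');

// Web side: enqueue the job and return immediately.
$queue->useTube('images');
$queue->put(json_encode(array('action' => 'resize', 'file' => 'example.jpg')));

// CLI worker: block until a job arrives, run it, then delete it.
$queue->watch('images');
$job  = $queue->reserve();
$task = json_decode($job->getData(), true);
// ... perform the resize here ...
$queue->delete($job);
?>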
Here is a simple class I coded for my web application. It allows for forking PHP scripts and other scripts. Works on UNIX and Windows.
class BackgroundProcess {
    static function open($exec, $cwd = null) {
        if (!is_string($cwd)) {
            $cwd = @getcwd();
        }
        @chdir($cwd);
        if (strtoupper(substr(PHP_OS, 0, 3)) == 'WIN') {
            // Windows: launch via WScript.Shell so the process is detached.
            $WshShell = new COM("WScript.Shell");
            $WshShell->CurrentDirectory = str_replace('/', '\\', $cwd);
            $WshShell->Run($exec, 0, false);
        } else {
            // UNIX: detach by redirecting output and backgrounding with &.
            exec($exec . " > /dev/null 2>&1 &");
        }
    }

    static function fork($phpScript, $phpExec = null) {
        $cwd = dirname($phpScript);
        @putenv("PHP_FORCECLI=true");
        if (!is_string($phpExec) || !file_exists($phpExec)) {
            if (strtoupper(substr(PHP_OS, 0, 3)) == 'WIN') {
                // Guess the php.exe location from the extension directory.
                $phpExec = str_replace('/', '\\', dirname(ini_get('extension_dir'))) . '\php.exe';
                if (@file_exists($phpExec)) {
                    BackgroundProcess::open(escapeshellarg($phpExec) . " " . escapeshellarg($phpScript), $cwd);
                }
            } else {
                // Prefer the CLI binary if available.
                $phpExec = exec("which php-cli");
                if ($phpExec[0] != '/') {
                    $phpExec = exec("which php");
                }
                if ($phpExec[0] == '/') {
                    BackgroundProcess::open(escapeshellarg($phpExec) . " " . escapeshellarg($phpScript), $cwd);
                }
            }
        } else {
            if (strtoupper(substr(PHP_OS, 0, 3)) == 'WIN') {
                $phpExec = str_replace('/', '\\', $phpExec);
            }
            BackgroundProcess::open(escapeshellarg($phpExec) . " " . escapeshellarg($phpScript), $cwd);
        }
    }
}
PHP DOES have multithreading; it's just not enabled by default. There is an extension called pthreads which does exactly that.
You'll need PHP compiled with ZTS (Zend Thread Safety), though.
Links:
Examples
Another tutorial
pthreads PECL Extension
UPDATE: since PHP 7.2, the parallel extension comes into play
Tutorial/Example
reference manual
This is the same method I have been using for a couple of years now and I haven't seen or found anything better. As people have said, PHP is single threaded, so there isn't much else you can do.
I have actually added one extra level to this, and that's getting and storing the process id. This allows me to redirect to another page and have the user sit on that page, using AJAX to check whether the process is complete (the process id no longer exists). This is useful for cases where the length of the script would cause the browser to time out, but the user needs to wait for the script to complete before the next step. (In my case it was processing large ZIP files of CSV-like files that add up to 30,000 records in the database, after which the user needs to confirm some information.)
I have also used a similar process for report generation. I'm not sure I'd use "background processing" for something like an email, unless there is a real problem with a slow SMTP. Instead I might use a table as a queue and then have a process that runs every minute to send the emails within the queue. You would need to be wary of sending emails twice or other similar problems. I would consider a similar queueing process for other tasks as well.
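A rough sketch of the process-id idea (Linux-specific; the script name is illustrative):
<?php
// Launch the worker and capture its PID so an AJAX poll can check on it.
$pid = (int) shell_exec('php doTask.php > /dev/null 2>&1 & echo $!');
// ... store $pid alongside the user's job record ...

// AJAX endpoint: the job is done once the PID no longer exists.
$running = file_exists('/proc/' . $pid); // /proc check only works on Linux
echo json_encode(array('done' => !$running));
?>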
It's a great idea to use cURL as suggested by rojoca.
Here is an example. You can monitor text.txt while the script runs in the background:
<?php
function doCurl($begin)
{
    echo "Do curl<br />\n";
    $url = 'http://'.$_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'];
    $url = preg_replace('/\?.*/', '', $url);
    $url .= '?begin='.$begin;
    echo 'URL: '.$url.'<br>';
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $result = curl_exec($ch);
    echo 'Result: '.$result.'<br>';
    curl_close($ch);
}

if (empty($_GET['begin'])) {
    doCurl(1);
} else {
    // Close the connection to the caller, then keep working in the background.
    while (ob_get_level()) {
        ob_end_clean();
    }
    header('Connection: close');
    ignore_user_abort(true); // without the true argument the setting is not changed
    ob_start();
    echo 'Connection Closed';
    $size = ob_get_length();
    header("Content-Length: $size");
    ob_end_flush();
    flush();

    $begin = $_GET['begin'];
    $fp = fopen("text.txt", "w");
    fprintf($fp, "begin: %d\n", $begin);
    for ($i = 0; $i < 15; $i++) {
        sleep(1);
        fprintf($fp, "i: %d\n", $i);
    }
    fclose($fp);
    if ($begin < 10) {
        doCurl($begin + 1);
    }
}
?>
There is a PHP extension called Swoole.
Although it might not be enabled by default, it is available on my hosting and can be enabled at the click of a button.
Worth checking out. I haven't had time to use it yet, as I was searching here for info when I stumbled across it and thought it was worth sharing.
Unfortunately PHP does not have any kind of native threading capabilities. So I think in this case you have no choice but to use some kind of custom code to do what you want to do.
If you search around the net for PHP threading stuff, some people have come up with ways to simulate threads on PHP.
If you set the Content-Length HTTP header in your "Thank You For Registering" response, then the browser should close the connection after the specified number of bytes are received. This leaves the server side process running (assuming that ignore_user_abort is set) so it can finish working without making the end user wait.
Of course you will need to calculate the size of your response content before rendering the headers, but that's pretty easy for short responses (write output to a string, call strlen(), call header(), render string).
This approach has the advantage of not forcing you to manage a "front end" queue, and although you may need to do some work on the back end to prevent racing HTTP child processes from stepping on each other, that's something you needed to do already, anyway.
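A minimal sketch of that trick (doSlowWork() is a hypothetical placeholder for the back-end processing):
<?php
ignore_user_abort(true); // keep running after the browser disconnects
$body = 'Thank You For Registering';
header('Connection: close');
header('Content-Length: ' . strlen($body));
echo $body;
flush();      // the browser has its full response and closes the connection
doSlowWork(); // hypothetical: processing continues server-side
?>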
If you don't want the full-blown ActiveMQ, I recommend considering RabbitMQ. RabbitMQ is a lightweight message broker that uses the AMQP standard.
I also recommend looking into php-amqplib - a popular AMQP client library for accessing AMQP-based message brokers.
Spawning new processes on the server using exec(), or directly on another server using curl, doesn't scale well at all. With exec you are basically filling your web server with long-running processes that could be handled by other, non-web-facing servers, and using curl ties up another server unless you build in some sort of load balancing.
I have used Gearman in a few situations and find it better for this sort of use case. I can use a single job-queue server to handle the queuing of all the jobs to be done, spin up worker servers that each run as many instances of the worker process as needed, and scale the number of worker servers up and down as needed. It also lets me shut down the worker processes entirely when required, and queues the jobs up until the workers come back online.
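A minimal sketch with the PECL gearman extension (host, port, and function name are illustrative):
<?php
// Client side (inside the web request): fire and forget.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$client->doBackground('send_welcome_email', json_encode(array('user_id' => 42)));

// Worker side (long-running CLI process, possibly on another machine):
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('send_welcome_email', function (GearmanJob $job) {
    $data = json_decode($job->workload(), true);
    // ... send the email ...
});
while ($worker->work());
?>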
I think you should try this technique. It lets you call as many pages as you like; all the pages run at once, independently, without waiting for each page's response (asynchronously).
cornjobpage.php //mainpage
<?php
post_async("http://localhost/projectname/testpage.php", "Keywordname=testValue");
//post_async("http://localhost/projectname/testpage.php", "Keywordname=testValue2");
//post_async("http://localhost/projectname/otherpage.php", "Keywordname=anyValue");
//call as many as pages you like all pages will run at once independently without waiting for each page response as asynchronous.
?>
<?php
/*
 * Executes a PHP page asynchronously so the current page does not have to wait for it to finish running.
 */
function post_async($url, $params)
{
    $post_string = $params;
    $parts = parse_url($url);
    $fp = fsockopen($parts['host'],
        isset($parts['port']) ? $parts['port'] : 80,
        $errno, $errstr, 30);
    $out  = "GET ".$parts['path']."?$post_string"." HTTP/1.1\r\n"; // you can use POST instead of GET if you like
    $out .= "Host: ".$parts['host']."\r\n";
    $out .= "Content-Type: application/x-www-form-urlencoded\r\n";
    $out .= "Content-Length: ".strlen($post_string)."\r\n";
    $out .= "Connection: Close\r\n\r\n";
    fwrite($fp, $out);
    fclose($fp);
}
?>
testpage.php
<?php
echo $_REQUEST["Keywordname"]; // case1 Output > testValue
?>
PS: if you want to send URL parameters in a loop, then follow this answer: https://stackoverflow.com/a/41225209/6295712
PHP is a single-threaded language, so there is no official way to start an asynchronous process with it other than using exec or popen. There is a blog post about that here. Your idea for a queue in MySQL is a good idea as well.
Your specific requirement here is for sending an email to the user. I'm curious as to why you are trying to do that asynchronously since sending an email is a pretty trivial and quick task to perform. I suppose if you are sending tons of email and your ISP is blocking you on suspicion of spamming, that might be one reason to queue, but other than that I can't think of any reason to do it this way.

PHP fputs "waits" until the end of the script

Currently I am trying to develop a PHP script used as a publicly available part of a client/server application. The php script should be used to authenticate users with a one-time token.
The other part of the application is a Java program which offers a telnet socket for other applications to connect to. Authentication is done through this telnet connection.
The Java part is already working, but I still have a huge problem with the PHP part.
In PHP, I have opened a connection to the telnet port of the Java program, which works so far. After the connection is initialized, the Java program waits for input from the PHP script in order to authenticate the user.
After the authentication process has finished, it returns a String to the PHP script (or any other program connected to its telnet server), which the PHP script should output.
Before I explain my problem, this is the part of the PHP script where the actual communication happens:
$tnconn = fsockopen("localhost", 53135, $errno, $errstr, 2);
if (!$tnconn) {
    echo "SERVER_UNAVAILABLE";
    die();
} else {
    $data = $p_ip." ".$p_name." ".$p_token;
    fputs($tnconn, $data);
    while (true) {
        if (($telnet_response = fgets($tnconn)) == false) {
            break;
        }
    }
}
echo $telnet_response;
It seems like the fputs() statement is executed after the loop, even though it should happen before the loop starts; otherwise the Java application couldn't get the data passed to the PHP script, which it needs to authenticate users.
Right after the data is received, the telnet server outputs the String indicating whether authentication was successful or not.
I tried temporarily removing the loop, and the data was successfully passed with fputs(), so I assume PHP waits until the whole script is finished and then executes the function.
How can I make it send the data before the loop?
Thank you in advance.
The issue is probably that you need to send a \n at the end of your data string so the telnet server knows you have sent a full sequence of data. Otherwise it is most likely sitting there waiting for more input.
Try:
$data = $p_ip." ".$p_name." ".$p_token . "\n";

PHP fsockopen() painfully slow

I'm using fsockopen() to open a number of connections from a list to check the online status of various IPs/hosts and ports ...
<?php
$socket = @fsockopen($row[2], $row[3], $errnum, $errstr, 1);
if ($errnum >= 1) { $status = 'offline'; } else { $status = 'online';}
fclose($socket);
It works, I'm not complaining about that, but I have approximately 15 IP/port pairs that I'm checking in a list (a PHP for() loop). I was wondering if there is a better way to do this? This way is VERY slow: it takes about 1-2 minutes for the server to come back with a response for all of them.
Update:
<?php
$socket = #fsockopen("lounge.local", "80", $errnum, $errstr, 30);
if ($errnum >= 1) { $status = 'offline'; } else { $status = 'online'; }
?>
It will display in a list: "ReadyNAS AFP readynas.local:548 online"
I don't know what more I can tell you. It just takes forever to load the collection of results...
From my own experience:
This code:
$sock=fsockopen('www.site.com', 80);
is slower compared to:
$sock=fsockopen(gethostbyname('www.site.com'), 80);
Tested in PHP 5.4. If making many connections at the same time, you can cache the host resolution result and re-use it to further reduce script execution time, for example:
function myfunc_getIP($host) {
    if (isset($GLOBALS['my_cache'][$host])) {
        return $GLOBALS['my_cache'][$host];
    }
    return $GLOBALS['my_cache'][$host] = gethostbyname($host);
}
$sock = fsockopen(myfunc_getIP('www.site.com'), 80);
If you plan to "ping" some URL, I would advise doing it with cURL. Why? Because cURL can send the pings in parallel; have a look at this: http://www.php.net/manual/en/function.curl-multi-init.php. In a previous project that was supposed to feed real-time data to our server, we pinged hosts to see whether they were alive, and cURL was the only option that helped us.
This is just advice, and may not be the right solution for your problem.
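A minimal sketch of the parallel check with curl_multi (the target URLs are placeholders):
<?php
$hosts = array('http://10.1.0.6/', 'http://10.1.0.7/', 'http://10.1.0.8/');
$mh = curl_multi_init();
$handles = array();
foreach ($hosts as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_NOBODY, true);      // HEAD-style probe, no body needed
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2); // fail fast on dead hosts
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}
do { // run all probes in parallel
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);
} while ($running > 0);
foreach ($handles as $url => $ch) {
    echo $url . ': ' . (curl_errno($ch) === 0 ? 'online' : 'offline') . "\n";
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
?>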
The last parameter to fsockopen() is the timeout, set this to a low value to make the script complete faster, like this:
fsockopen('192.168.1.93', 80, $errNo, $errStr, 0.01)
Have you compared the results of fsockopen(servername) versus fsockopen(ip-address)? If the timeout parameter does not change a thing, the problem may be in your name server. If fsockopen() with an IP address is faster, you'll have to fix your name server, or add the domains to the /etc/hosts file.
I would recommend doing this a bit differently.
Put these hosts in a database table, something like:
++++++++++++++++++++++++++++++++++++
| host | port | status | timestamp |
++++++++++++++++++++++++++++++++++++
Then move the status-checking part into a cron script that runs every 5 minutes, or however often you want.
This script will check each host:port and update the status and timestamp for each record, and your page will just do a DB query and show each host, its status, and when it was last checked (something like: 1 minute ago, etc...).
This way your page will load fast.
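A minimal sketch of the cron script, assuming a host_status(host, port, status, timestamp) table and PDO credentials:
<?php
$db = new PDO('mysql:host=localhost;dbname=monitor', 'user', 'pass');
$update = $db->prepare(
    'UPDATE host_status SET status = ?, timestamp = NOW() WHERE host = ? AND port = ?');
foreach ($db->query('SELECT host, port FROM host_status') as $row) {
    // Short timeout: a dead host costs at most 2 seconds.
    $fp = @fsockopen($row['host'], (int) $row['port'], $errno, $errstr, 2);
    $status = $fp ? 'online' : 'offline';
    if ($fp) {
        fclose($fp);
    }
    $update->execute(array($status, $row['host'], $row['port']));
}
?>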
According to the php manual, there's a timeout parameter. Try setting it to a lower value.
Edit: To add to Daniel's answer, nmap might be the best tool to use. Set it up with a cron job to scan and update your records every X minutes. Something like
$ for ip in $(seq 6 8);
do
port_open=$(nmap -oG - -p 80 10.1.0.$ip|grep open|wc -l);
echo "10.1.0.$ip:$port_open";
done
10.1.0.6:1
10.1.0.7:1
10.1.0.8:0
I had an issue where fsockopen requests were slow but wget was really snappy. In my case it happened because the hostname had both an IPv4 and an IPv6 address, but IPv6 was down, so each request spent 20 or so seconds waiting for IPv6 to time out.

PHProxy hanging on response

I've been working on a PHProxy server for some time (you can see my recent posts) and I'm at a point where I have everything working except this problem.
do
{
    $data = @fread($_socket, 8192);
    $_response_body .= $data;
}
while (isset($data[0]));
unset($data);
My proxy server logs into a server running IIS without the user's intervention (credentials were verified somewhere else). Upon logging into this site, the request headers are constructed and sent, but the response waits for 120 seconds at this section of code. After that long period the proxy continues correctly, as it is supposed to. The response I'm waiting on is just an "Object has moved here" page that gives me a new location. I've verified the headers are correct via Wireshark and LiveHttpHeaders. Again, everything IS working, it just takes forever to load this particular page.
Can any PHP developers give me a hint as to what I should be checking for malfunctions?
Thanks,
EDIT:
[17-Jul-2010 12:33:17] BEFORE RESPONSE
[17-Jul-2010 12:35:17] AFTER RESPONSE
It takes 120 seconds exactly. Is something timing out?
This code significantly improves response time, but it doesn't identify the underlying problem of where/who/what is timing out to begin with.
stream_set_timeout($_socket, 1);
do
{
    $data = @fread($_socket, 8192); // silenced to avoid the "normal" warning from a faulty SSL connection
    $_response_body .= $data;
}
while (isset($data[0]));
unset($data);
