PHP send SOAP response early?

Well, this is an old issue I've been dealing with and still have no solution for, so I'm trying a new approach.
How can I send the SOAP response early (before script execution ends)?
These errors occur when the ACK file is not sent within 30 seconds, because the process takes longer to complete than the allotted time.
flush() is not working; I get this error:
org.xml.sax.SAXParseException: XML document structures must start and end within the same entity.
Without flush() I get this:
org.xml.sax.SAXParseException: Premature end of file.
The script can take over 180 seconds to complete, while the server waiting for the response only waits about 30 seconds before timing out (which causes the above errors).
Any thoughts as to how I can fix this?
Here is some of the code. This is how I accept the incoming SOAP request and send the ACK file:
$data = 'php://input';
$content = file_get_contents($data);
if ($content) {
    respond('true');
} else {
    respond('false');
}
The respond() function:
function respond($tf) {
    $ACK = <<<ACK
<?xml version="1.0" encoding="utf-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <soapenv:Body>
        <notifications xmlns="http://soap.sforce.com/2005/09/outbound">
            <Ack>$tf</Ack>
        </notifications>
    </soapenv:Body>
</soapenv:Envelope>
ACK;
    print trim($ACK);
}
PHP uses a single-threaded processing model and will not send back the ACK file until the thread has completed its processing. Is there some way to close the socket after sending the ACK and continue the processing, so I don't get these timeout issues on the sending server?

As far as I know we can't do anything about the 30-second limit, and the output won't be flushed before the script finishes. Can you try splitting your processing logic into two pieces?
A "listener" script that accepts messages, logs the job to be done, sends the ACK instantly, and then quits.
The actual processing on your side, which might take longer.
The "job to be done" could be a newly spawned process (check out the comments on the pcntl_fork() function, though they don't look too promising to me), or you could store the data in a file or database and periodically process it with another script, for example one scheduled to run every 5 minutes; see the sketch below.
If 60 seconds would somehow save you, you could rewrite your outbound message as a callout from Apex. Basically, you then write your own SOAP envelope and send it to any HTTP address. You could put it in a trigger. See here for the limitations of this approach, though.
Other ranting:
I don't think your ACK really "matters" to Salesforce. Outbound messages are just notifications. Do you rely somewhere else in your application on Ack = true/false? If not, a listener that blindly sends ack=true and schedules the job might really be the way to go ;)
From the other question I understood you basically just need to store updates in a DB on your side. You realize you shouldn't use outbound messages for audit purposes, right? (Link, search for "audit".)
Wouldn't it be simpler to make PHP the active side? Make it query Salesforce with something like [SELECT Id, Name FROM Account WHERE LastModifiedDate > :lastTimeYouQueried]. That way you can take all the time you want to process the results :)
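A rough sketch of that polling approach, assuming the Force.com Toolkit for PHP (the WSDL path, credentials and the $lastRun bookkeeping are placeholders you'd fill in):
<?php
require_once 'soapclient/SforcePartnerClient.php';

$sf = new SforcePartnerClient();
$sf->createConnection('partner.wsdl.xml');
$sf->login($username, $password . $securityToken);

// $lastRun is a Unix timestamp persisted after the previous poll
$since = gmdate('Y-m-d\TH:i:s\Z', $lastRun);
$result = $sf->query("SELECT Id, Name FROM Account WHERE LastModifiedDate > $since");
foreach ($result->records as $record) {
    // take all the time you want processing each record
}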

a) (Optional) You can use set_time_limit() to increase the execution time limit.
b) You should increase the timeout on the WSDL client object, for example:
$clientwsdl->setOpt('timeout', 300); // if you are using PEAR::SOAP
Most WSDL client classes let you define a timeout.
c) And no, you can't respond early using SOAP, at least not within a single call. Keep in mind that SOAP returns XML, so a partial XML document is usually invalid (it is missing its closing tags).
d) Alternatively, you can use a method other than SOAP, for example reading a URL:
$fp = fopen("http://www.mysite.com/url.php", "r");
where url.php returns the columns (or some other kind of value) without using XML:
30|50|70|80|20
30|50|70|80|20
30|50|70|80|20
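Reading and splitting such a response on the client side is then trivial, for example:
$fp = fopen("http://www.mysite.com/url.php", "r");
while (($line = fgets($fp)) !== false) {
    $columns = explode('|', trim($line)); // e.g. array('30', '50', '70', '80', '20')
    // ... process $columns ...
}
fclose($fp);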

Related

PHP file loads slowly if it uses many class methods

I have some concerns about my PHP file, which takes too long to process.
I'm using XAMPP.
The problem is that when I use too many methods of my classes, the PHP file loads or executes too slowly.
Here's my example code:
class Sample {
    public function show() {
        echo 'test';
    }
}

$classObject = new Sample();
$classObject->show();
When I run the PHP code above, it takes only about 1.5 s, I think, but if I add more method calls like this:
$classObject = new Sample();
$classObject->show();
$classObject->show();
the PHP code takes almost 3 s to execute.
Is there a way to solve this problem?
I already found out what the problem is: when I used OOP-style PHP while fetching data from my database, it took a long time to execute, but when I used procedural-style PHP it executed faster.
Firstly, you should take a look at some PHP debugging techniques, because I don't see any attempt at debugging in your code sample.
To your problem:
What you've described is an issue that may or may not be PHP's fault. The thing is, you have to understand how request processing works. In your case (XAMPP + PHP) it goes like this:
Web server start:
XAMPP starts Apache (or Tomcat, or whatever web server you're using)
Apache starts the PHP binary
PHP loads its configuration (php.ini)
The request:
You start a web browser (it doesn't matter which one)
You enter a web address (with or without a specific path - URI)
Then some DNS things happen - that's NOT important for you now
The browser sends a plain-text document to the server via the HTTP protocol, which communicates over TCP - also not so important for you at this time
The web server receives the HTTP request and begins processing it
The web server runs your PHP script in the already-running PHP binary and awaits the output
The response:
The web server takes the output of your script and sends it back to the browser
The browser decides (by the headers) how (and whether) the content will be displayed to you
Then the browser begins displaying the response content - in your case an HTML or plain-text document
While drawing the document, the browser processes all JavaScript and CSS along the way (top to bottom)
With this knowledge you need to find out WHERE along the way the delay occurs.
The first thing you should do is take a look at how long your script takes to process, so:
<?php
$start_time = microtime(true);

$classObject = new Sample();
$classObject->show();
$classObject->show();

echo 'Processing took: ' . number_format(microtime(true) - $start_time, 6, '.', '') . ' seconds';
Then you should look into the developer tools your browser provides, for example DevTools in Google Chrome (very good for debugging, though not the best) - hit F12 to open it.
In the Network panel you'll see timing values like these:
Time is the duration between the moment the request was fully sent and the moment the response was fully received
Load is the sum of the durations of all requests made up to that moment
Finish is the duration between the moment the first request was fully sent and the moment the last response was fully received
DOMContentLoaded is the duration of rendering the entire document (browser/client side)
Once you have all of these specific durations, you can narrow down where the problem probably lies :)

PHP methods to trigger an async function after the request ends

In certain apps you sometimes need to do processing that is irrelevant to the response, for example sending push notifications after a chat message. Such tasks have no effect on the response you will return to the user.
What is the best approach to running such tasks?
An example: in an API for a blog, after a post is created I want to send a 201 to the client and end the connection. Afterwards, I want to send a curl call to a push notification server, or trigger some data analysis and save it to disk. But I don't want the user to wait for such tasks to end.
Methods I can think of:
1. Sending Connection: close and Content-Length headers and flushing out the response, but this is not compatible with all servers and browsers.
2. Triggering the task with PHP's exec() function? But then how can I pass a JSON object to that function? (See the sketch below.)
So, any ideas how to accomplish this asynchronous behaviour in PHP in a manner that works in any server setup?
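For option 2 above, one common pattern is to serialize the payload, shell-escape it, and launch the worker in the background; a quick sketch (worker.php is hypothetical):
<?php
$payload = json_encode(array('post_id' => 42, 'action' => 'notify'));
// '> /dev/null 2>&1 &' detaches the worker so exec() does not wait for it
exec('php worker.php ' . escapeshellarg($payload) . ' > /dev/null 2>&1 &');
// worker.php reads the payload back with:
// $data = json_decode($argv[1], true);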
You can take an example from how WordPress triggers its wp-cron.php functionality: it sends a HEAD request to wp-cron.php using curl, which fits your idea of sending a request without waiting for the response.
I would do it with this code:
Register as many functions as required with PHP's register_shutdown_function():
register_shutdown_function('background_function_name_1');
register_shutdown_function('background_function_name_2');
Write the lines below after the closing html tag (if any), once all output has been printed (adjust the time limit to match the upper limit of your script's execution time):
ignore_user_abort(true);
set_time_limit(120);
header('Connection: close');
header('Content-Length: ' . ob_get_length());
ob_end_flush();
flush();
Here the server will send the output to the browser, and then all the registered functions will be called in the order they were registered.
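Putting the pieces together, a minimal end-to-end sketch of this technique (the notification URL and the background task are placeholders):
<?php
// Hypothetical background task, e.g. a push notification sent via curl
function send_push_notification() {
    $ch = curl_init('https://push.example.com/notify'); // placeholder URL
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    curl_close($ch);
}
register_shutdown_function('send_push_notification');

ob_start();                                   // buffer the response body
echo json_encode(array('status' => 'created'));

ignore_user_abort(true);                      // keep running if the client disconnects
set_time_limit(120);
header('Connection: close');
header('Content-Length: ' . ob_get_length());
ob_end_flush();
flush();
// The client now has its full response; the shutdown function runs after
// the script finishes, without the user waiting for it.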

HTTP server: posting data to php-cgi using C++

I'm writing a simple HTTP server and I'd like to use php-cgi to handle POST requests. I wrote handlePOST as follows:
void KServer::handlePOST(int sockfd, string file, string pdata) {
    char* argvs[MAX_PARAMS];
    argvs[0] = const_cast<char*>(CGI_PATH);
    argvs[1] = const_cast<char*>(file.c_str());
    argvs[2] = 0;  // execve() expects a NULL-terminated argv

    stringstream ss;
    ss << pdata.length();
    string clen, sname;
    ss >> clen;
    clen = "CONTENT_LENGTH=" + clen;
    sname = "SCRIPT_FILENAME=" + file;

    char* env[] = {
        "REQUEST_METHOD=POST",
        "REDIRECT_STATUS=CGI",
        const_cast<char*>(clen.c_str()),
        const_cast<char*>(sname.c_str()),
        "CONTENT_TYPE=application/x-www-form-urlencoded",
        0
    };

    istringstream stream(pdata);
    cin.rdbuf(stream.rdbuf());  // this redirects C++'s cin, not file descriptor 0
    execve(argvs[0], argvs, env);
    ...
I know that php-cgi gets POST data from STDIN, so I put the POST data (the pdata variable) into cin's rdbuf. However, the program fails to fetch the data from STDIN when executing execve. If I enter the string at the console instead, the program runs correctly.
It is highly improbable that, when your custom HTTP server reads the socket, it always manages to read exactly the number of bytes in the header portion of the HTTP message before getting to this point, so that when it executes the external process, the body portion of the message, containing the POST data, is waiting to be read on its standard input.
More than likely, you are using the C++ I/O library, or maybe even the C stdio library, to read the standard input, or socket. Which means, of course, that the built-in input buffering employed by iostreams or stdio has already managed to swallow a good chunk of the POST data from the body of the HTTP message, which immediately follows its header. It's now waiting for you to continue reading on your std::cin, stdin, or whatever your code is actually reading.
It therefore follows that, sadly, when you execute the external process, it will be dismayed to find that the POST data it expects has already been consumed, either in entirety, or in part, on its standard input. Instead, it's sitting right here, in your process, waiting for you to resume reading your std::cin, stdin, or whatever.
In other words, for this to work, you cannot just execute an external process, and wash your hands of the whole mess. You will need to set up a pipe to the external process's standard input, and it is your responsibility to proceed to meticulously read the body of the HTTP message, that forms the POST data, just like you've just now finished reading its header, and then dump it into the pipe.
That means parsing the Content-Length header, to know exactly how many bytes are in the body, and reading exactly that (because you're not going to get an EOF, and rely on it, since the socket will remain open for your expected HTTP reply).
And, of course, you will also then need to carefully handle everything that happens as a result of writing to a pipe, i.e. gracefully handling the reader shutting down before consuming the entirety of the piped data, resulting in a broken pipe, etc...
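Since the rest of this page is PHP, here is the same pipe-to-stdin mechanism sketched with PHP's proc_open(), purely to illustrate what the C++ server has to do by hand (the script path and body are made up):
<?php
$body = 'name=foo&value=bar';
$env = array(
    'REQUEST_METHOD'  => 'POST',
    'REDIRECT_STATUS' => 'CGI',
    'CONTENT_TYPE'    => 'application/x-www-form-urlencoded',
    'CONTENT_LENGTH'  => strlen($body),
    'SCRIPT_FILENAME' => '/var/www/handler.php', // hypothetical script
);
$proc = proc_open('/usr/bin/php-cgi', array(
    0 => array('pipe', 'r'), // child's stdin: we write the POST body here
    1 => array('pipe', 'w'), // child's stdout: the CGI response
), $pipes, null, $env);

fwrite($pipes[0], $body); // exactly CONTENT_LENGTH bytes
fclose($pipes[0]);
echo stream_get_contents($pipes[1]);
fclose($pipes[1]);
proc_close($proc);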

Strange timeout behaviour using the PHP SOAP client

I'm trying to make a proxy-like page that forwards an AJAX request to a SOAP server.
The browser sends 2 requests to the same page (i.e. server.php with different query strings) every 10 seconds.
The server makes a SOAP call to the SOAP server depending on the query string.
All is working fine.
Then I put a sleep (40 secs) in the SOAP server to simulate a slow response, and I also put a timeout on the caller to abort the call after some seconds.
server.php (pseudo-code):
$timeout = 10;
ini_set("default_socket_timeout", $timeout);

$id = $_GET['id'];
$wsdl = 'http://soapserver/wsdl';
$client = new SoapClient($wsdl, array('connection_timeout' => $timeout));
print($client->getQuote($id));
If the browser sends an AJAX request to http://myserver/server.php?id=IBM,
the request stops after the timeout I set.
If I make a second call before the first one stops, the second one does not respect the timeout.
i.e.
Request:
GET http://myserver/server.php?id=IBM
and after 1 second
GET http://myserver/server.php?id=AAP
Response:
after 10 seconds:
No data
after 20 seconds:
No data
I also tried not using PHP SOAP and using curl instead, but I got the same results.
I also tried to open 3 tabs on my browser and call:
http://myserver/server.php?id=IBM
http://myserver/server.php?id=AAP
http://myserver/server.php?id=MSX
The first one stops after 10 seconds, the second after 20 seconds, and the third after 30 seconds.
Is this normal behaviour, or am I missing something?
Thanks in advance
You are probably starting sessions, and session_start() blocks a second call to it until the other request has 'freed' the session (in other words: has finished and will not write any data to the session anymore). For time-consuming requests, don't start a session if you don't need one; if you DO need one, get all the data that you need, then call session_write_close() BEFORE you do the time-consuming thing. If you need to write to the session afterwards, just call session_start() again.
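A minimal sketch of that pattern applied to the server.php above:
<?php
session_start();                       // read whatever session data you need
$user = isset($_SESSION['user']) ? $_SESSION['user'] : null;
session_write_close();                 // release the session lock BEFORE the slow call

$timeout = 10;
ini_set("default_socket_timeout", $timeout);
$client = new SoapClient('http://soapserver/wsdl', array('connection_timeout' => $timeout));
print($client->getQuote($_GET['id'])); // parallel requests are no longer serialized

// If you need to write to the session afterwards:
// session_start();
// $_SESSION['lastQuoteAt'] = time();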

Limiting Parallel/Simultaneous Downloads - How to know if download was cancelled?

I have a simple file upload service, written out in PHP, which also includes a script that controls download speeds by sending limited-sized packets when a user requests a download from this site.
I want to implement a system to limit parallel/simultaneous downloads to 1 per user if they are not premium members. In the download script above, I can use a MySQL database to store a record that has: (1) the user ID; (2) the file ID; (3) when the download was initiated; and (4) when the last packet was sent, which is updated each time this is done (if DL speed is limited to 150 kB/sec, then after every 150 kB, this record is updated, etc.).
However, thus far, the database record will only be deleted once the download has successfully completed — at the end of the script, after the download has been fully served, the record is deleted from the table:
insert DB record;
while (download is being served) {
    serve packet of data;
    update DB record with current date/time;
}
// Download is now complete
delete DB record;
How will I be able to detect when a download has been cancelled? Would I just have to have a Cron job (or something similar) detect if an existing download record is more than X minutes/hours old? Or is there something else I can do that I'm missing?
I hope I've explained this well enough. I don't think posting specific code is required; I'm interested more in the logistics of how/whether this can be done. If specific code is needed, I will gladly provide it.
NOTE: I know how to detect if a file was successfully downloaded; I need to know how to detect if it was cancelled, aborted, or otherwise stopped (and not just paused). This will be useful in stopping parallel downloads, as well as preventing a situation where the user cancels Download #1 and tries to initiate Download #2, only to find that the site claims he is still downloading file #1.
EDIT: You can find my download script here: http://codetidy.com/1319/ — it already supports multi-part downloads and download resuming.
<?php
class DownloadObserver
{
    protected $file;

    public function __construct($file) {
        $this->file = $file;
    }

    public function send() {
        // -> note in the DB that the download has started
        readfile($this->file);
    }

    public function __destruct() {
        // download is done, either completed or aborted
        $aborted = connection_aborted();
        // -> note in the DB
    }
}

$dl = new DownloadObserver("/tmp/whatever");
$dl->send();
should work just fine. No need for a shutdown_function or any funky self-built connection observation.
You will want to check out the following functions: connection_status(), connection_aborted() and ignore_user_abort() (see the connection handling section of the PHP manual for more info).
Although I can't guarantee the reliability (it's been a while since I've played around with this), with the right combination you should be able to accomplish what you want. There are a few caveats when working with these functions, though; the big one is that if something goes wrong, you could end up with stranded PHP scripts running on the server, requiring you to kill Apache to stop them.
The following should give you a good idea of how to do it (adapted from the PHP code examples and a couple of the comments):
<?php
// Set PHP not to cancel execution if the connection is aborted
// and drop the time limit to allow for big file downloads
ignore_user_abort(true);
set_time_limit(0);

while (true) {
    // See the ignore_user_abort() docs re having to send data
    echo chr(0);
    // Make sure the data gets flushed properly or the connection check won't work
    ob_flush();
    flush();
    // Check the connection status and exit the loop if the client aborted
    if (connection_status() != CONNECTION_NORMAL || connection_aborted()) break;
    // Just to provide some spacing in this example
    sleep(1);
}
file_put_contents("abort.txt", "aborted\n", FILE_APPEND);
// Never hurts to ensure that the script halts execution
die();
Obviously, for your use case, the data being sent would simply be the next chunk of download data (just make sure you flush the buffer properly to ensure the data is actually sent). As far as I'm aware, there is no way to distinguish between pausing and aborting/stopping. Pause/resume functionality (and multi-part downloading, i.e. how download managers accelerate downloads) relies on the "Range" header, which basically requests byte x to byte y of the file. So if you want to allow resumable downloads, you'll have to deal with that too; see the sketch below.
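A bare-bones sketch of honoring a single-range "Range: bytes=x-y" request (no multi-range support, error handling omitted):
<?php
$file = '/tmp/whatever';
$size = filesize($file);
$start = 0;
$end = $size - 1;

if (isset($_SERVER['HTTP_RANGE']) &&
    preg_match('/bytes=(\d+)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
    $start = (int)$m[1];
    if ($m[2] !== '') {
        $end = (int)$m[2];
    }
    header('HTTP/1.1 206 Partial Content');
    header("Content-Range: bytes $start-$end/$size");
}
header('Content-Length: ' . ($end - $start + 1));

$fp = fopen($file, 'rb');
fseek($fp, $start);
echo fread($fp, $end - $start + 1);
fclose($fp);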
There is no HTTP "cancel" signal that is sent by default. So it looks like you will need to decide on a timeout: the length of time a connection can sit without sending/receiving another packet. If you are sending rather small packets (as I presume you are), keep the timeout short for best effect.
In your while condition you will need to check the age of the last timestamp update; if it's too old, stop sending the file. A periodic cleanup job can likewise treat stale records as cancelled downloads, as sketched below.
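As a rough sketch of that cleanup (the table and column names are hypothetical), a cron-run script might do:
<?php
$pdo = new PDO('mysql:host=localhost;dbname=downloads', 'user', 'pass');

// Treat any download whose last packet is older than 5 minutes as
// cancelled/aborted and free the user's download slot
$pdo->exec("DELETE FROM active_downloads
            WHERE last_packet_at < NOW() - INTERVAL 5 MINUTE");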
