Sending very large string via HTTP POST in PHP

I have a mail address on my server which receives emails and pipes them to a PHP script, piper.php. This script then parses the email and retrieves the from, to, subject, and message parts correctly. Once that is complete, I'd like this script to post this information to a remote CakePHP website to be stored in a MySQL database. I have all of this working except for the posting part.
To post, I use the code given at this link using cURL, but I have two questions (the code is below for reference only):
function curl_post_async($url, $params)
{
    foreach ($params as $key => &$val) {
        if (is_array($val)) $val = implode(',', $val);
        $post_params[] = $key.'='.urlencode($val);
    }
    $post_string = implode('&', $post_params);
    $parts = parse_url($url);
    $fp = fsockopen($parts['host'],
        isset($parts['port']) ? $parts['port'] : 80,
        $errno, $errstr, 30);
    pete_assert(($fp != 0), "Couldn't open a socket to ".$url." (".$errstr.")");
    $out = "POST ".$parts['path']." HTTP/1.1\r\n";
    $out .= "Host: ".$parts['host']."\r\n";
    $out .= "Content-Type: application/x-www-form-urlencoded\r\n";
    $out .= "Content-Length: ".strlen($post_string)."\r\n";
    $out .= "Connection: Close\r\n\r\n";
    if (isset($post_string)) $out .= $post_string;
    fwrite($fp, $out);
    fclose($fp);
}
The code uses URL encoding. I am concerned that this may not work if, for example, my email message exceeds 1 MB of text. Aren't there limits on the size of data that can be sent via URL encoding?
Is there a better way of doing this? Should I set up a Perl script to connect directly to the remote MySQL database and write to it? Suggestions are welcome.

It's not cURL that imposes the limit.
RFC 2616 does not specify any maximum length for a POST body.
The server has its own POST size restrictions (Apache, for example: http://httpd.apache.org/docs/2.2/mod/core.html#LimitRequestBody), and your PHP instance has its own setting as well: http://php.net/manual/en/ini.core.php#ini.post-max-size.
You could write a PHP (or Perl, or Python, or Bash -- it doesn't matter) script to connect to MySQL directly, but then you would have to allow connections to your database from outside. Remember that MySQL has a restriction too: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet.
It's up to you to decide what to do. Both ways seem legitimate.
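For what it's worth, a plain cURL POST avoids the URL-length worry entirely, since the data travels in the request body; only the server-side limits above apply. A minimal sketch (the endpoint URL and field names are hypothetical placeholders, not from your app):
$ch = curl_init('http://your-cake-app.example.com/emails/add'); // hypothetical endpoint
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    'from'    => $from,
    'to'      => $to,
    'subject' => $subject,
    'message' => $message, // can be well over 1 MB; no URL-encoding size limit applies here
)));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture the response instead of printing it
$response = curl_exec($ch);
curl_close($ch);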

Related

How to make a really fast request and not wait for an answer?

We use REST for logging - we need more info than a simple text log can provide. So we implemented a REST service that accepts some basic information about the event and then asynchronously fetches more detail from the event's source. So it's practically a ping with very little additional info. The logger machine then verifies the API key, connects to the DB, and writes the basic info along with pointers to the more detailed information.
We have a problem, though: logging may slow down the app a lot, because connecting and waiting for the answer takes quite some time (it's fast, but when you log 10+ events in one request it becomes a problem).
The question is:
Is there a way in plain PHP (5.3) to make a request to a URL and NOT wait for any answer, or to wait just for the HTTP 200 header to be sure?
I think I can make the logging server send the HTTP 200 header as soon as it gets the request. But it needs to do some more work then ;)
What you want is called an asynchronous request.
A solution could look like this (quoted):
function curl_post_async($url, $params)
{
    foreach ($params as $key => &$val) {
        if (is_array($val)) $val = implode(',', $val);
        $post_params[] = $key.'='.urlencode($val);
    }
    $post_string = implode('&', $post_params);
    $parts = parse_url($url);
    $fp = fsockopen($parts['host'],
        isset($parts['port']) ? $parts['port'] : 80,
        $errno, $errstr, 30);
    $out = "POST ".$parts['path']." HTTP/1.1\r\n";
    $out .= "Host: ".$parts['host']."\r\n";
    $out .= "Content-Type: application/x-www-form-urlencoded\r\n";
    $out .= "Content-Length: ".strlen($post_string)."\r\n";
    $out .= "Connection: Close\r\n\r\n";
    if (isset($post_string)) $out .= $post_string;
    fwrite($fp, $out);
    fclose($fp);
}
If you need more information, take a look at:
http://petewarden.typepad.com/searchbrowser/2008/06/how-to-post-an.html
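If you'd rather not hand-write the HTTP request, one common workaround (a sketch, with a placeholder URL) is to use cURL with a very short timeout and simply not wait for the reply:
$ch = curl_init('http://your.log.server/log.php'); // placeholder logging endpoint
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT_MS, 100); // wait at most 100 ms; tune so the request still gets out
curl_exec($ch); // will "fail" with a timeout, which we deliberately ignore
curl_close($ch);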
Here are two tricks that may be helpful.
I. Continue executing after the HTTP connection is closed. This means you can close the connection immediately once the main function has finished, then continue with your log processing.
<?php
ignore_user_abort(true); // keep the script running even if the client disconnects
ob_start(); // start output buffering
echo "show something to user"; // do whatever you need -- your main function
session_write_close(); // close the session file on the server side, if needed
header("Content-Encoding: none"); // stop the browser treating the content as gzipped
header("Content-Length: ".ob_get_length()); // send the length header
header("Connection: close"); // or redirect to some URL
ob_end_flush(); flush(); // really send the content; the order matters: 1. ob buffer to normal buffer, 2. normal buffer to output
// continue doing something on the server side
ob_start();
sleep(5); // the user won't wait for these 5 seconds
echo 'for log'; // the user can't see this
file_put_contents('/tmp/process.log', ob_get_contents());
// or call a remote server like http://your.log.server/log.php?xxx=yyy&aaa=bbb
ob_end_clean();
?>
II. Using the function apache_note to write logs is a much lighter-weight choice than inserting into a DB, because Apache opens the log file once and keeps the handle for as long as it runs. It's stable and really fast.
Apache configuration:
<VirtualHost *:80>
    DocumentRoot /path/to/your/web
    ServerName your.domain.com
    ErrorLog /path/to/your/log/error_log
    <Directory /path/to/your/web>
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
    SetEnvIf Request_URI "/log\.php" mylog
    LogFormat "%{mylog}n" log1
    CustomLog "|/usr/sbin/cronolog /path/to/logs/mylog/%Y%m%d/mysite.%Y%m%d%H.log" log1 env=mylog
</VirtualHost>
PHP code:
<?php
apache_note('mylog', session_id()); //you can log any data you need, see http://www.php.net/manual/en/function.apache-note.php
Then you can combine tricks I and II: call the URL http://your.log.server/log.php?xxx=yyy&aaa=bbb to log your detailed data after the connection for the main page has closed, at no extra time cost at all.

Connect through HTTPS instead of HTTP

I want to use a simple API, and I want to do it in a secure way.
It currently uses sockets on port 80. As far as I know, traffic on port 80 is sent in the clear, so it doesn't seem like a secure connection.
As the data to send contains a username and password, I want to use HTTPS instead of HTTP to make it secure.
I was wondering if it is as simple as just changing this line:
$headers = "POST /api/api.php HTTP/1.0\r\n";
for this other one:
$headers = "POST /api/api.php HTTPS/1.0\r\n";
and changing the port to 443.
Here is the connect function:
// api connect function
function api_connect($Username, $Password, $ParameterArray)
{
    // Create the URL to send the message.
    // The variables are set using the input from an HTML form
    $err = array();
    $url = "api.text-connect.co.uk";
    $headers = "POST /api/api.php HTTP/1.0\r\n";
    $headers .= "Host: ".$url."\r\n";
    // Create post string
    // Username and Password
    $poststring = "Username=".$Username."&";
    $poststring .= "Password=".$Password;
    // Turn the parameter array into the variables
    while (list($Key, $Value) = @each($ParameterArray))
    {
        $poststring .= "&".$Key."=".urlencode($Value);
    }
    // Finish off the headers
    $headers .= "Content-Length: ".strlen($poststring)."\r\n";
    $headers .= "Content-Type: application/x-www-form-urlencoded\r\n";
    // Open a socket
    $http = fsockopen($url, 80, $err[0], $err[1]);
    if (!$http)
    {
        echo "Connection to ".$url.":80 failed: ".$err[0]." (".$err[1].")";
        exit();
    }
    // Socket was open successfully, post the data.
    fwrite($http, $headers."\r\n".$poststring."\r\n");
    // Read the results from the post
    $result = "";
    while (!feof($http))
    {
        $result .= fread($http, 8192);
    }
    // Close the connection
    fclose($http);
    // Strip the headers from the result
    list($resultheaders, $resultcode) = split("\r\n\r\n", $result, 2);
    return $resultcode;
}
?>
Your code has a huge number of issues regardless of whether it's using HTTP or HTTPS - implementing an HTTP client (or server) is MUCH more complicated than simply throwing some headers across a socket and then sinking the response.
What's particularly bad about this approach is that it will work some of the time - then it will fail and you won't understand why.
Start again using curl.
Doing it this way, you only need to change the URL (cURL also implements a cookie jar, support for header injection, automatic following of redirects, routing via proxies, and verification or non-verification of SSL certificates, amongst other things).
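As a sketch of what that could look like for the function in the question (switching to HTTPS then becomes a one-character URL change; whether the API actually answers on HTTPS is an assumption you would need to verify):
// Rough cURL rewrite of api_connect(); a sketch, not a tested drop-in replacement.
function api_connect($Username, $Password, $ParameterArray)
{
    $fields = array_merge(
        array('Username' => $Username, 'Password' => $Password),
        $ParameterArray
    );
    $ch = curl_init('https://api.text-connect.co.uk/api/api.php');
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($fields));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
    $result = curl_exec($ch);
    curl_close($ch);
    return $result; // headers are already stripped for you
}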
I was wondering if it is as simple as
No, it isn't. It really, really isn't.
HTTPS is HTTP tunnelled over SSL. So you don't change the content of the HTTP request at all.
You do need to perform all the SSL handshaking before you do the HTTP stuff though.
SSL is crypto, it is therefore hard. Don't try reinventing this wheel. Use a library such as cURL.
Use curl and set CURLOPT_SSL_VERIFYPEER = false (note that this disables certificate verification, which gives up much of the protection HTTPS is supposed to provide).

PHP Async GET request works on one server, but doesn't on the other

Please see the edits at the bottom for additional information!
I have two servers. Both should be able to call each other with a GET request.
To make the request (it's more firing an event than making a request, actually) I am using this code:
function URLCallAsync($url, $params, $type='POST')
{
    foreach ($params as $key => &$val) {
        if (is_array($val)) $val = implode(',', $val);
        $post_params[] = $key.'='.urlencode($val);
    }
    $post_string = implode('&', $post_params);
    $parts = parse_url($url);
    $fp = fsockopen($parts['host'],
        isset($parts['port']) ? $parts['port'] : 80,
        $errno, $errstr, 30);
    // Data goes in the path for a GET request
    if ('GET' == $type) $parts['path'] .= '?'.$post_string;
    $out = "$type ".$parts['path']." HTTP/1.1\r\n";
    $out .= "Host: ".$parts['host']."\r\n";
    $out .= "Content-Type: application/x-www-form-urlencoded\r\n";
    $out .= "Content-Length: ".strlen($post_string)."\r\n";
    $out .= "Connection: Close\r\n\r\n";
    // Data goes in the request body for a POST request
    if ('POST' == $type && isset($post_string)) $out .= $post_string;
    fwrite($fp, $out);
    fclose($fp);
}
I feed the function the exact same data (apart from the URL) on both servers (I copied the calling file to test it!), but it only works in one direction!
I write the calls to that function to a log file so I can investigate whether something is going wrong.
Server A -> Server B works exactly as it should; the log file at Server A contains the correct URL.
Server B -> Server A only prints the correct information in the log file of Server B; Server A never receives the request.
What could be the reason for something like this?
edit:
Could it be the different kinds of server?
Server A is nginx, Server B is Apache.
Server A also has a '~' symbol in its URL; maybe that's the problem?
The parameters of the GET request are encoded with PHP's urlencode; maybe that creates problems?
I tried around a bit, but the problem is still that the request isn't coming through to Server A. Strangely, from a browser it works perfectly (assuming I enter the correct URL with the parameters).
edit2:
If I exchange "URLCallAsync" with "file_get_contents" it works like it should. But the problem is that file_get_contents is blocking!
So it can only be the function itself. But strangely it works in the opposite direction :(
edit3:
The function "URLCallAsync" runs trough without error, notice or anything else.
It just isn't received by the other server.
What exactly is file_get_contents doing so different???
I got it working.
After a lot of fiddling with Wireshark I found that file_get_contents sends an even simpler request than my async function!
It simply omits the Content-Length field completely! It just provides "GET ..." and "Host".
It also uses HTTP/1.0 instead of 1.1, but that didn't change anything.
So the solution is: sending the Content-Length header (which had the value 0, since I used GET) somehow makes the server reject the request. I don't know for sure whether it was the server that rejected the request or something else, like a firewall that detected a "malformed" request, but at least the problem is solved.
So next time you send requests, don't provide the Content-Length header if you don't need it :)
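For reference, a trimmed sketch of the request-building part with that fix applied (everything else in URLCallAsync unchanged):
// GET request: data goes in the path, no body, and crucially no Content-Length header.
$out  = "GET ".$parts['path']."?".$post_string." HTTP/1.1\r\n";
$out .= "Host: ".$parts['host']."\r\n";
$out .= "Connection: Close\r\n\r\n";
fwrite($fp, $out);
fclose($fp);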

How to run the PHP code asynchronous

How can I run PHP code asynchronously, without waiting? I have a long-running (almost infinite) task that should start when the server starts and should process asynchronously, without waiting.
The possible options I guess are:
Running the code in a web page and keep it open to do that task
Calling the script from some command line utility (I am not sure how) which would process in the background.
I am running the PHP scripts on my local server which will send emails when certain events occur, e.g. birthday reminders.
Please suggest how I can achieve this without opening the page in a browser.
If you wanted to run it from the browser (perhaps you're not familiar with the command line) you could still do it. I researched many solutions for this a few months ago, and the most reliable and simplest to implement was the following, from How to post an asynchronous HTTP request in PHP:
<?php
$params['my_param'] = $a_value;
post_async('http://localhost/batch/myjob.php', $params);
/*
 * Executes a PHP page asynchronously so the current page does not have to wait for it to finish running.
 */
function post_async($url, array $params)
{
    foreach ($params as $key => &$val) {
        if (is_array($val)) $val = implode(',', $val);
        $post_params[] = $key.'='.urlencode($val);
    }
    $post_string = implode('&', $post_params);
    $parts = parse_url($url);
    $fp = fsockopen($parts['host'],
        isset($parts['port']) ? $parts['port'] : 80,
        $errno, $errstr, 30);
    $out = "POST ".$parts['path']." HTTP/1.1\r\n";
    $out .= "Host: ".$parts['host']."\r\n";
    $out .= "Content-Type: application/x-www-form-urlencoded\r\n";
    $out .= "Content-Length: ".strlen($post_string)."\r\n";
    $out .= "Connection: Close\r\n\r\n";
    if (isset($post_string)) $out .= $post_string;
    fwrite($fp, $out);
    fclose($fp);
}
Let's say the file above is in your web root directory (/var/www, for example) and is called runjobs.php. By visiting http://localhost/runjobs.php your myjob.php file would start to run. You'd probably want to add some output to the browser to let you know it was submitted successfully, and it wouldn't hurt to add some security if your web server is open to the rest of the world. One nice thing about this solution, if you add some security, is that you can start the job anywhere you can find a browser.
Definitely sounds like a job for a cron task. You can set up a PHP script to do your task once, and have cron run it as often as you like. Here's a good writeup on how to have a PHP script run as a cron task; it's very easy to do.
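For example, a crontab entry along these lines runs the script every five minutes (the paths are placeholders for your own layout):
# hypothetical crontab entry: run the job every five minutes, appending output to a log
*/5 * * * * /usr/bin/php /var/www/batch/myjob.php >> /var/log/myjob.log 2>&1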
This isn't really what PHP is designed for. You have to use the PECL threading library to spin off threads that run asynchronously, and I don't recommend it. The new hotness in the async department is node.js - I recommend you look into that and see if you can utilize it. It's designed for lightweight, asynchronous network operations, and can be used to fire PHP scripts.
How can I run PHP code asynchronously, without waiting? I have a long-running (almost infinite) task that should start when the server starts and should process asynchronously, without waiting.
Assuming a typical LAMP system, you can start a PHP daemon from the command line with
root# php yourscript.php &
where yourscript.php contains something similar to
<?php
$log = fopen('/var/log/yourscript.log', 'a+');
// ### check if we are running already omitted
while (true) {
    // do interesting stuff and log it.
    // don't be a CPU hog
    sleep(1);
}
?>
Embellishments:
To make your script directly executable: chmod +x yourscript.php
and add #!/usr/bin/php to the beginning of yourscript
To start it with Apache, you should add that command to your Apache startup script (usually apachectl) and be sure to add code to kill it when Apache stops.
The "check if we are running already" involves a file holding your PID in /var/locks/ and something like system('/bin/ps '.$thePID); it also makes the kill instruction easier to write.
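A minimal sketch of that PID-file check (the path and the exact test are assumptions; posix_kill requires the POSIX extension):
<?php
$pidFile = '/var/locks/yourscript.pid'; // hypothetical lock-file path
if (file_exists($pidFile)) {
    $pid = (int)file_get_contents($pidFile);
    // Signal 0 sends nothing; it only tests whether the process still exists.
    if (function_exists('posix_kill') && posix_kill($pid, 0)) {
        die("Already running as PID {$pid}\n");
    }
}
file_put_contents($pidFile, getmypid()); // record our own PID for the next check
?>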
Thanks Todd Chaffee, but your code was not working for me, so I edited it. I hope you don't mind; maybe it will also help others with this technique.
cornjobpage.php //mainpage
<?php
// If you want to call the page multiple times for an array of values,
// uncomment the loop start and end below.
?>
<?php
//foreach ($inputkeywordsArr as $singleKeyword) {
    $url = "http://localhost/projectname/testpage.php";
    $params['Keywordname'] = "testValue"; //$singleKeyword
    post_async($url, $params);
//} // end foreach ($inputkeywordsArr)
?>
<?php
/*
 * Executes a PHP page asynchronously so the current page does not have to wait for it to finish running.
 */
function post_async($url, array $params)
{
    foreach ($params as $key => &$val) {
        if (is_array($val)) $val = implode(',', $val);
        $post_params[] = $key.'='.urlencode($val);
    }
    $post_string = implode('&', $post_params);
    $parts = parse_url($url);
    $fp = fsockopen($parts['host'],
        isset($parts['port']) ? $parts['port'] : 80,
        $errno, $errstr, 30);
    $out = "GET ".$parts['path']."?$post_string"." HTTP/1.1\r\n"; // you can use POST instead of GET if you like
    $out .= "Host: ".$parts['host']."\r\n";
    $out .= "Content-Type: application/x-www-form-urlencoded\r\n";
    $out .= "Content-Length: ".strlen($post_string)."\r\n";
    $out .= "Connection: Close\r\n\r\n";
    fwrite($fp, $out);
    fclose($fp);
}
?>
testpage.php
<?php
echo $_REQUEST["Keywordname"]; // Output > testValue
?>

Why is my script taking so long to retrieve headers?

<?php
set_time_limit(0);
$errorArr = array();
if (!isset($argv[1]))
{
    array_push($errorArr, "You forgot to enter a host.");
}
if ((isset($argv[1])) AND (!filter_var($argv[1], FILTER_VALIDATE_IP)))
{
    array_push($errorArr, "The host you entered is not a valid IP address.");
}
if (!isset($argv[2]))
{
    array_push($errorArr, "You forgot to select a port.");
}
if (!empty($errorArr))
{
    echo "You have the following errors:\n";
    print_r($errorArr);
    die("Syntax is as follows: php {$argv[0]} host port\n");
}
$host = $argv[1];
$port = $argv[2];
echo ":::Connecting...\n";
$fh = fsockopen($host, $port);
if (!$fh)
{
    die(":::Connection failed.\n:::Aborting.\n");
}
echo ":::Connected!\n:::Sending headers.\n";
$header = "PROPFIND /webdav/ HTTP/1.1\r\n";
$header .= "Host: {$host}\r\n";
$header .= "User-Agent: BitKinex/3.2.3\r\n";
$header .= "Accept: */*\r\n";
$header .= "Pragma: no-cache\r\n";
$header .= "Cache-Control: no-cache\r\n";
$header .= "Depth: 1\r\n";
$header .= "Content-Length: 220\r\n";
$header .= "Content-Type: text/xml\r\n\r\n\r\n";
if (!fwrite($fh, $header))
{
    die(":::Couldn't send headers. Aborting.\n");
}
$exHeader = explode("\r\n", $header);
foreach ($exHeader as $ecHeader)
{
    echo "<<<{$ecHeader}\n";
}
echo "\n:::Retrieving syntax...\n";
while (1)
{
    while ($data = fgets($fh, 512))
    {
        echo ">>>{$data}";
        flush();
    }
}
?>
I'm working on a script to connect to WebDAV, upload a file, and disconnect. It connects and sends headers fine, but then it takes forever to retrieve syntax. At times, it takes several minutes, and I can't understand why. Is it a problem in my code?
And yes, I realize there's an infinite while loop there. That's done on purpose, because I haven't figured out how to know when the server is done sending information to me. So I guess that's another question, if anyone could provide insight into that.
Thanks
Your problem is because you are sending the Content-Length header with a value of 220, while not sending any content at all. The server hangs in there expecting content, but it never arrives...
And for your infinite loop thing, you don't need it at all. fgets will return false if the connection has closed. Send the Connection: close header to tell Apache to end the connection after the data has been sent. Your while loop will evaluate to false when the data has been read entirely and the connection has closed, and your loop will exit.
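Putting that together, the retrieval part of the script could shrink to something like this sketch (assuming the Content-Length/body mismatch above is also fixed):
// With "Connection: close" sent in the request headers, fgets() returns
// false once the server has finished and closed the connection.
while ($data = fgets($fh, 512))
{
    echo ">>>{$data}";
    flush();
}
fclose($fh);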
You might want to test it using cURL then. Try this one out: http://curl.haxx.se/mail/archive-2006-02/0000.html
That way you can see if it's server side or code side.
WebDAV can chug if the machine you are connecting to handles lots of traffic in general, and especially lots of web traffic. The reasons are complex, but the solutions I have used in the past have primarily involved coding around the delay: either by queueing things up to wait, or by pushing things to a box that isn't under heavy load but is more directly connected to the server in question and can push the files to it via different means.
This all requires access, however, and if you have control over the machines you are connecting to, you should be able to reconfigure them to give yourself priority (which may not be an option if you are connecting to a production web server). That said, I've never had to deal with this in PHP, so the problem could certainly be caused by something else.
