This question has two parts:
Part I - restriction?
I'm able to store data to my DB with this:
www.mysite.com/myscript.php?testdata=abc123
This works for a short string (e.g. 'abc123') and the page echoes what was written to the DB. However, if the testdata string is longer than 512 characters, a row is still added to the database, but it's blank, and the echo statement in the script doesn't display the input string either.
N.B. I'm on a shared server and have emailed my host to see if it's a restriction.
Part II - best practice?
If I can get past the above hurdle, I want to use a string that's ~15k characters long, created in a desktop app that concatenates the testdata string from various parameters. What's the best way to send a long string with PHP POST?
Thanks in advance for your help; I'm not too savvy with PHP.
Edit: The table config and the blank-row anomaly for strings > 512 chars were shown in attached screenshots.
Edit3: here's my PHP script, if it helps:
<?php
include("connect.php");

$data = mysql_real_escape_string($_GET['testdata']); // escape the input before putting it in the query
$result = mysql_query("INSERT INTO test (testdata) VALUES ('$data')");

if ($result) // Check result
{
    echo $data;
}
else echo "Error " . mysql_error(); // mysql_error() matches the mysql_* API used above

mysql_close();
?>
POST is definitely the method you want to use, and your best bet with that will be with cURL. Something like this should work:
$ch = curl_init();
curl_setopt( $ch, CURLOPT_URL, "http://www.mysite.com/myscript.php" );
curl_setopt( $ch, CURLOPT_POST, TRUE );
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, TRUE ); // return the response instead of printing it
curl_setopt( $ch, CURLOPT_POSTFIELDS, array( 'testdata' => $my_really_long_string ) ); // keyed so the script can read $_POST['testdata']
$data = curl_exec( $ch );
You'll need to modify the above to include additional cURL options as per your environment, but something like this is what you'd be looking for.
You'll want to make sure that your DB column is long enough to hold the really long string as well (e.g. a TEXT column rather than a short VARCHAR).
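On the receiving side, the script from the question only needs to read from $_POST instead of $_GET; everything else can stay the same (a minimal sketch, reusing the table and column names from the question):

<?php
include("connect.php");

// testdata now arrives in the POST body, so URL-length limits no longer apply
$data = mysql_real_escape_string($_POST['testdata']);
$result = mysql_query("INSERT INTO test (testdata) VALUES ('$data')");

echo $result ? $data : "Error " . mysql_error();

mysql_close();
?>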
Answer 1: Yes, there is a restriction on URL length. See:
What is the maximum possible length of a query string?
Answer 2: You can send your string as an ordinary POST variable ($_POST). Just check the limits on input/POST size in php.ini.
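If you want to check those limits from PHP itself, something like this will show the relevant values (a sketch; which settings your host lets you raise will vary):

// a ~15k-character string is well below typical defaults, but it's easy to verify:
echo ini_get('post_max_size') . "\n";   // maximum size of the whole POST body
echo ini_get('max_input_vars') . "\n";  // maximum number of input variables accepted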
Related
I have a solr query that has been working perfectly:
$ch = curl_init();
$ch_searchURL = "$base_url/$collection/select?q=$s&wt=json&indent=true";
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, $ch_searchURL);
$rawData = curl_exec($ch);
$json = json_decode($rawData,true);
Initially, my $s variable was literally one thing: e.g. ?q=name:brian, but my user base wanted the ability to search multiple things at once, so I started to build that in:
?q=name:("brian"+OR+"mike"+OR+"james"+OR+"emma"+OR+"luke")
It then got to the point where they wanted to search 5,000 things at once, which caused this method of building the Solr GET query to fail, because the literal URL length exceeded the maximum allowed length of ~2,000 characters. So I thought using a POST might work, which I accomplished by adding the following lines:
$ch_searchURL = "$base_url/$collection/select";
$multiline_q = "q=$s&wt=json&indent=true";
curl_setopt($ch, CURLOPT_POSTFIELDS, $multiline_q); // setting CURLOPT_POSTFIELDS makes cURL send a POST
This seemed to allow me to search for around 500 items at a time (which, in GET world, would still mean a URL of around 4,000 characters), so it's better than the GET method, but once I go past that number of items the Solr query fails again.
Because I'm POSTing (maybe?), I don't get any error response from solr, so I don't know what's causing the query to fail, and I can't manually test the query in the browser because it's ~40,000 characters long and won't paste. If I do var_dump($rawData);, I see this:
string(238) " 05 " // or 04, or 08
I've used solr quite a bit with PHP & cURL, but always with the GET method. This is my first foray into using POST. Am I doing something wrong here? Am I just exceeding the actual amount of q options that I can ask solr to retrieve for me, regardless of the method?
Any light that anyone could shed on this would be helpful...
There is no limit on the Solr side - we regularly use Solr in a similar way.
You need to look at the settings for your servlet container (Tomcat, Jetty etc.) and increase the maximum POST size. Look up maxPostSize if you are using Tomcat and maxFormContentSize if you are using Jetty.
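Also, to see the failure instead of a silent empty result, you can ask cURL for the transport error and the HTTP status explicitly (a hedged sketch that only adds error reporting around the calls from the question):

curl_setopt($ch, CURLOPT_URL, $ch_searchURL);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $multiline_q);

$rawData = curl_exec($ch);

if ($rawData === false) {
    // transport-level failure (timeout, connection refused, ...)
    echo "cURL error: " . curl_error($ch);
} else {
    // Solr normally returns a non-200 status plus an error body when it rejects a query
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    if ($httpCode != 200) {
        echo "HTTP $httpCode: $rawData";
    }
}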
I'm having a little trouble: I want to encrypt some POST data I get from a form and then send it to my Node.js server in JSON format to put it into a database.
My problem: I seem to be unable to post the data once it is encrypted. I can post the plain JSON string just fine, but not once the encrypted value is in it:
My code:
include('Crypt/RSA.php'); // phpseclib 1.x, as in the examples linked below
$rsa = new Crypt_RSA();
$rsa->loadKey($keydata);
$rsa->setEncryptionMode(CRYPT_RSA_ENCRYPTION_PKCS1);
$encrypted = $rsa->encrypt("test");
$jsonArray = array(
'crypt' => $encrypted
);
$jsonArrayEncoded = json_encode($jsonArray);
echo $jsonArrayEncoded;
$ch = curl_init('https://..........');
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $jsonArrayEncoded);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$response = json_decode(curl_exec($ch), true);
curl_close($ch);
I don't even get the echo output. But the string does seem to get encrypted, as I can echo it (a lot of charset errors plus some random letters and numbers), and if I decrypt it in the same PHP script I get the correct result as well. I don't get any console warnings or errors, neither in Chrome nor Firefox.
Is there anything I'm doing wrong? (I'm quite sure there is.)
Edit: I'm using this as crypto library: http://phpseclib.sourceforge.net/rsa/examples.html#encrypt,enc1
Edit 2: Well, as advised in the comments, I converted the string to UTF-8, but now it seems to be too long to be decrypted with my key... though I only encrypted the word "test"...
I think I have to dig deeper...
In case it helps anyone: for decryption I'm using the Ursa module for Node.js with the following code:
var buffer = new Buffer(req.body.crypt);
var data = private.decrypt(buffer, 'utf8', 'utf8', ursa.RSA_PKCS1_PADDING);
Well, as advised in the comments, I converted the string to UTF-8, but now it seems to be too long to be decrypted with my key... though I only encrypted the word "test"...
It'd help to see your updated code that does that. In lieu of doing that...
json_encode doesn't natively handle binary data. My recommendation would be to do something like this:
$jsonArray = array(
'crypt' => bin2hex($encrypted)
);
$jsonArrayEncoded = json_encode($jsonArray);
echo $jsonArrayEncoded;
You'd need to convert it back down to binary, though, after you JSON-decode it on the Node.js side (e.g. new Buffer(hexString, 'hex')).
Alternatively, you could base64_encode it and base64-decode it later.
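A minimal sketch of that idea on the PHP side, using the variable names from the question (whichever encoding you pick just has to match what the receiving side decodes):

$jsonArray = array(
    'crypt' => bin2hex($encrypted)           // or base64_encode($encrypted)
);
$jsonArrayEncoded = json_encode($jsonArray); // now plain ASCII, so json_encode won't choke on it
echo $jsonArrayEncoded;

// sanity check: the raw ciphertext survives the round trip
$decoded   = json_decode($jsonArrayEncoded, true);
$rawCipher = hex2bin($decoded['crypt']);     // PHP 5.4+; or base64_decode(...) if base64 was used
var_dump($rawCipher === $encrypted);         // bool(true)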
The concern I'd have with UTF-8 encoding is that PHP's internal string type isn't UTF-8; if Node.js's is, that mismatch could cause problems.
The json_encode function has numerous flags that you can pass to it, which control how particular characters are escaped. The following call might solve the problems you are having:
json_encode($jsonArray, JSON_UNESCAPED_SLASHES | JSON_HEX_APOS | JSON_HEX_QUOT | JSON_HEX_AMP );
This is just a bang-head-on-wall situation. This pattern works perfectly in JavaScript, and I have no idea what to do.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://yugioh.wikia.com/wiki/List_of_Yu-Gi-Oh!_BAM_cards');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$chHtml = curl_exec($ch);
curl_close($ch);
$patt = '/<table class="wikitable sortable card-list">[\s\S]*?<\/table/im'; // <-- this is the problem line
preg_match($patt, $chHtml, $matches);
If I make the quantifier greedy ([\s\S]*) it works fine, but then it matches all the way to the last </table> on the page.
There is nothing wrong with the pattern; the problem is that you need a larger backtrack limit than the default.
Explaining:
In regex problems like this, always check for errors using preg_last_error().
If you run it against the specific response from the site you mentioned, you will see that you are getting PREG_BACKTRACK_LIMIT_ERROR; it's a resource problem, which is why smaller texts do not raise the error.
Solution:
To overcome this limit you can raise it with the following at the start of your script:
ini_set('pcre.backtrack_limit', 10000000);
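Putting the two suggestions together, a sketch using the pattern and variables from the question:

ini_set('pcre.backtrack_limit', 10000000); // raise the PCRE backtracking limit

$patt = '/<table class="wikitable sortable card-list">[\s\S]*?<\/table/im';

if (preg_match($patt, $chHtml, $matches) === false) {
    // preg_match() returns false on error; preg_last_error() tells you which one
    if (preg_last_error() === PREG_BACKTRACK_LIMIT_ERROR) {
        echo "Backtrack limit exceeded - raise pcre.backtrack_limit further";
    }
}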
Is it possible to use file_get_contents() to download only a portion of a file? For example, if I'm downloading a text file that is 2 MB and I only want the first 5 bytes, is this possible?
Sure. The additional arguments allow you to specify a portion of the file. See example #3 on the manual page:
<?php
// Read 14 characters starting from the 21st character
$section = file_get_contents('./people.txt', NULL, NULL, 20, 14);
var_dump($section);
?>
Here, the last two arguments limit the amount of data returned to just the portion of interest.
Note: The offset argument is a little unpredictable with remote files, as stated also on the manual page:
Seeking (offset) is not supported with remote files. Attempting to seek on non-local files may work with small offsets, but this is unpredictable because it works on the buffered stream.
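For the 5-byte case from the question, the length argument alone should be enough, since no seeking is needed at offset 0; the server still starts sending the file, but PHP stops reading after the requested number of bytes (the URL is just a placeholder):

// fetch only the first 5 bytes of the remote file
$firstBytes = file_get_contents('http://example.com/textfile.txt', false, null, 0, 5);
var_dump($firstBytes);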
function ranger($url, $bytes){
$headers = array(
"Range: bytes=0-".$bytes
);
$curl = curl_init($url);
curl_setopt($curl, CURLOPT_HTTPHEADER, $headers);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
return curl_exec($curl);
}
$url = "http://example.com/textfile.txt";
$raw = ranger($url, 5);
echo $raw;
Keep in mind that the Range header must be supported by the server. With file_get_contents I don't think it's possible; even if it were, you should use cURL.
If I download a file from a website using:
$html = file_get_html($url);
Then how can I know the size, in kilobytes, of the HTML string? I want to know because I want to skip files over 100 KB.
If you do file_get_contents, you've already gotten the whole file.
If you mean "skip processing", rather than "skip retrieval", you can just get the length of the string: strlen($html). For kilobytes, divide that by 1024.
This is imprecise because the string may contain UTF-8 characters that are more than one byte long, and very small files will actually occupy a full filesystem block on disk rather than just their byte length, but it's probably good enough for the arbitrary-threshold cutoff you're looking for.
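In code, that threshold check could look something like this (a sketch; the 100 KB cutoff is from the question):

$html = file_get_contents($url);
$kb = strlen($html) / 1024;   // size of the fetched string in kilobytes
if ($kb > 100) {
    // skip processing this file
}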
To skip fetching large files, you want to use the cURL library.
<?php
function get_content_length($url) {
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HEADER, 1);
curl_setopt($ch, CURLOPT_NOBODY, 1);
$hraw = explode("\r\n", curl_exec($ch));
curl_close($ch);
$hdrs = array();
foreach ($hraw as $hdr) {
    $a = explode(": ", trim($hdr), 2);
    if (count($a) == 2) {       // skip the status line and blank lines
        $hdrs[$a[0]] = $a[1];
    }
}
return (isset($hdrs['Content-Length'])) ? $hdrs['Content-Length'] : FALSE;
}
$url="http://www.example.com/";
if (get_content_length($url) < 100000) {
$html = file_get_contents($url);
print "Yes.\n";
} else {
print "No.\n";
}
?>
There may be a more elegant way to pull this information out of curl, but this is what came to mind fastest. YMMV.
Note that setting the CURLOPT options this way makes curl use a "HEAD" rather than "GET" request, so we're not actually fetching this URL twice.
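As for a more elegant way: curl_getinfo() can report the content length directly, which avoids parsing the raw headers by hand (a sketch; the helper name is just for illustration):

function get_content_length_info($url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_NOBODY, 1); // HEAD request, no body transferred
    curl_exec($ch);
    $len = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD); // -1 if the server didn't send Content-Length
    curl_close($ch);
    return ($len >= 0) ? $len : FALSE;
}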
The definition of what a string is differs between PHP and the intuitive meaning:
"Hällo" (mind the Umlaut) looks like a 5-character String, but to PHP it is really a 6-byte array (assuming UTF8) - PHP doesn't have a notion of a String representing text, it just sees it as a sequence of bytes (The PHP euphemism is "binary safe").
So strlen("Hällo") will be 6 (UTF8).
That said, if you want to skip anything above 100 KB, you probably won't mind whether that's 99.5k characters translating to 100k bytes.
file_get_html returns an object, so the information about how big the string is has been lost at that point. Get the string first, and build the object afterwards:
$html = file_get_contents($url);
echo strlen($html); // size in bytes
$html = str_get_html($html);
You can use mb_strlen($html, '8bit') to force byte semantics, so that 1 character = 1 byte.