phpseclib - endless downloading - php

I'm using phpseclib 0.3.1 to work with a remote SFTP server. I have a script which downloads cover images from the SFTP server, saves them on my server and makes updates in the database.
I ran this script for 7000 images and after roughly 10-12 minutes it looked like the script had stopped (eventually I found out that the script had entered an endless loop).
After some investigation, I have found the following details:
The function get($remote_file, $local_file = false) from SFTP.php is called to download the image file.
In this function, _get_sftp_packet() is called in a while(true) loop.
In _get_sftp_packet() there is a call to _get_channel_packet(NET_SFTP_CHANNEL);
And in _get_channel_packet() there is a call to $response = $this->_get_binary_packet();
My problem is that this $response is an empty string. In _get_sftp_packet() the length of this response is used as a decrement, so if the function returns an empty string (length 0) I never get out of the loop in _get_sftp_packet().
Has anyone faced this problem? What does an empty response from _get_binary_packet() mean?
I would appreciate any help.

It's probably an issue with the window size handling, an issue that has been fixed for a while now.
You're running 0.3.1? The latest version is 0.3.10, so you're quite a few releases behind.
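For reference, a rough sketch of what the download call looks like once you've upgraded to a newer 0.3.x release (the host, credentials and file paths below are placeholders); the key point is to update phpseclib and then check get()'s return value for each file:
include('Net/SFTP.php'); // assumes phpseclib's directory is on the include_path
$sftp = new Net_SFTP('sftp.example.com'); // placeholder host
if (!$sftp->login('username', 'password')) {
    exit('Login failed');
}
// get() returns false on failure, so a bad transfer can be logged and skipped
if ($sftp->get('/covers/image001.jpg', '/local/covers/image001.jpg') === false) {
    // log the failure and move on to the next image instead of looping forever
}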

Related

Using PHP to read real time serial data

I am using a Raspberry Pi to read real time RS232 data via a USB port using a Prolific 2303 adaptor.
I want to parse this and display it in a web page.
My first approach was to use a Python script. This could read and parse the data, but passing it to a PHP web page proved a problem. Either I could use GET to send it, or save it to a file and read it with PHP. The latter is a non-starter with an SD card (it's not going to last too long), and the former might involve file storage anyway.
I then tried a PHP approach:
$device = "/dev/ttyUSB0";
$fp = fopen($device, "r") or die("could not open $device");
while (true) {
    $xml = fread($fp, 400); // read up to 400 bytes per pass
    print_r($xml);
}
fclose($fp); // never reached because of the infinite loop above
print_r() produces:
CC128-v0.110011909:27:3016.70000951000660008500024 CC128-v0.110011909:27:3616.70000951000670008600024 CC128-v0.110011909:27:4216.70000951000680008700027 CC128-v0.110011909:27:4816.70000951000680008600024 CC128-v0.110011909:27:5516.70000951000680008800024
These are 5 bursts of XML stripped of all their tags. The complete XML stream is 340 characters long, sent at 57600-N-8-1 every 6 seconds.
Then it crashes with "Fatal error: Maximum execution time of 30 seconds exceeded in ~/meter.php on line 79". Sometimes there is missing or corrupted data, too.
If I use $xml = fgets($fp); I get no data.
The stream I am expecting, as read using a Python script, is:
['<msg><src>CC128-v0.11</src><dsb>00114</dsb><time>17:45:02</time><tmpr>16.8</tmpr><sensor>0</sensor><id>00095</id><type>1</type><ch1><watts>00065</watts></ch1><ch2><watts>00093</watts></ch2><ch3><watts>00024</watts></ch3></msg>\r\n']
I tried to use PECL DIO but could not locate all the dependencies. Apparently it is deprecated for the current version of Debian and will be removed from the next; I don't know if that refers to Buster, Bullseye or Bookworm. I also tried using php_serial.class, which I found on GitHub, but could not get a complete XML file output or even the stripped-down data-only stream.
What am I missing to get a PHP variable updated every 6 seconds?
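Not a definitive answer, but a minimal sketch of one way to approach it under the question's own assumptions (Linux, /dev/ttyUSB0, stty available, each message terminated by \r\n), run as a long-lived CLI script rather than inside a web request:
set_time_limit(0); // a long-running reader should not hit the 30-second limit
exec('stty -F /dev/ttyUSB0 57600 cs8 -cstopb -parenb raw'); // 57600-N-8-1
$fp = fopen('/dev/ttyUSB0', 'r') or die('cannot open port');
$buffer = '';
while (true) {
    $buffer .= fread($fp, 400);
    // peel off complete messages only; each burst ends in \r\n
    while (($pos = strpos($buffer, "\n")) !== false) {
        $line = trim(substr($buffer, 0, $pos));
        $buffer = substr($buffer, $pos + 1);
        if ($line === '') {
            continue;
        }
        $msg = simplexml_load_string($line);
        if ($msg !== false) {
            echo (string) $msg->time, ' ', (string) $msg->ch1->watts, "\n";
        }
    }
}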

2006 MySQL server has gone away while saving object

I'm getting the error "General error: 2006 MySQL server has gone away" when saving an object.
I'm not going to paste the code since it's way too complicated and I can explain with this example, but first a bit of context:
I'm executing a function via the command line using Phalcon tasks. The task creates an object from a Model class, and that object calls a CasperJS script that performs some actions on a web page. When it finishes, it saves some data, and that's where I sometimes get "MySQL server has gone away", but only when CasperJS takes a bit longer.
Task.php
function doSomeAction(){
    $object = Class::findFirstByName("test");
    $object->performActionOnWebPage();
}
In Class.php
function performActionOnWebPage(){
    $result = exec("timeout 30s casperjs somescript.js");
    if($result){
        $anotherObject = new AnotherClass();
        $anotherObject->value = $result->value;
        $anotherObject->save();
    }
}
It seems like the $anotherObject->save() call is affected by the time exec("timeout 30s casperjs somescript.js") takes to return, when it shouldn't be.
It's not a matter of the data being saved, since it both fails and saves successfully with the same input; the only difference I see is the time CasperJS takes to return a value.
It seems as if, for some reason, Phalcon keeps the MySQL connection open during the whole execution of the "Class.php" function, provoking the timeout when CasperJS takes too long. Does this make any sense? Could you help me fix it or find a workaround?
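For illustration, a rough sketch of one possible workaround: force the database adapter to reconnect after the slow exec() and before saving. The 'db' service name and the connect() call are assumptions about how the Phalcon DI container and PDO adapter are set up:
function performActionOnWebPage(){
    $result = exec("timeout 30s casperjs somescript.js"); // exec() returns the last line of output as a string
    if ($result) {
        // MySQL may have closed the idle connection while casperjs was running,
        // so re-open it before touching any models
        $this->getDI()->getShared('db')->connect();
        $anotherObject = new AnotherClass();
        $anotherObject->value = $result;
        $anotherObject->save();
    }
}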
The problem seems to be that either you are trying to fetch more data in a single packet than your MySQL config file allows, or your wait_timeout value is not set appropriately for what your code needs.
Check your wait_timeout and max_allowed_packet values; you can check them with the commands below:
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
Then increase these values as required in your my.cnf (Linux) or my.ini (Windows) config file and restart the MySQL service.
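For reference, a rough sketch of what that might look like in my.cnf (the values are illustrative placeholders, not recommendations):
[mysqld]
wait_timeout       = 600     # seconds an idle connection stays open
max_allowed_packet = 64M     # maximum size of a single packet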

Matlab urlread error within timer function

I am using a timer function in MATLAB to continuously execute a certain script. Within this script, I am using urlread to retrieve data from web services, which works like a charm.
I am now trying to use urlread to execute a simple HTTP request within this script to insert data into a MySQL database. So I simply build the URL string and append the value to be passed to the PHP script.
Code within the script being executed in the timer function:
db_url = 'http://someurl/update.php?value=';
db_url = strcat(db_url,num2str(value));
urlread(db_url);
clear db_url
My problem is the following: when I run the timer, it works fine for one execution, but then stops, displaying the following error:
"Either this URL could not be parsed or the protocol is not supported."
What is going wrong? When I check my mysql database, I see that one new line has been added to my database, which means it generally works, just won't execute multiple times within the timer.
Any idea what is going wrong? Many thanks in advance!
I figured out what the problem was. The value variable is an array that grows in size each iteration. So what I needed to do was specify value(end), like so:
db_url = 'http://someurl/update.php?value=';
db_url = strcat(db_url,num2str(value(end)));
urlread(db_url);
clear db_url
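For completeness, a rough sketch of what the update.php side might look like (the database name, table, column and credentials are placeholders):
<?php
// update.php - store one submitted value
$value = isset($_GET['value']) ? (float) $_GET['value'] : null;
if ($value === null) {
    http_response_code(400);
    exit('missing value');
}
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO readings (value) VALUES (?)');
$stmt->execute(array($value));
echo 'ok';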

PHP exec - echo output line by line during progress

I'm trying to find a way in which I can echo out the output of an exec call, and then flush that to the screen while the process is running. I have written a simple PHP script which accepts a file upload and then converts the file with FFmpeg if it is not the appropriate file type. I am doing this on a Windows machine. Currently my command looks like this:
$cmd = "ffmpeg.exe -i ..\..\uploads\\".$filename." ..\..\uploads\\".$filename.".m4v 2>&1";
exec( $cmd, $output);
I need something like this:
while( $output ) {
    print_r( $output);
    ob_flush(); flush();
}
I've read about using ob_flush() and flush() to clear the output buffer, but I only get output once the process has completed. The command works perfectly; it just doesn't update the page while converting. I'd like to have some output so the person knows what's going on.
I've set the timeout:
set_time_limit( 10 * 60 ); // 10 minute time out
and would be very grateful if someone could point me in the right direction. I've looked at a number of solutions on Stack Overflow which come close, but none seem to have worked.
Since exec is a blocking call, you have no way of using output buffers to get status.
Instead you could redirect the output of the system call to a log file. Then let the client query the server for a progress update, in which case the server can parse the last lines of the log file to get information about the current progress and send it back to the client.
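As a rough sketch of that idea (the file names here are placeholders; note that ffmpeg writes its progress with carriage returns, so a real parser would also split the log on \r):
// in the conversion request: send ffmpeg's output to a log file instead of capturing it
exec('ffmpeg.exe -i input.avi output.m4v > conversion.log 2>&1');

// progress.php - a separate endpoint the browser polls to report the latest log line
$lines = @file('conversion.log', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
echo $lines ? htmlspecialchars(end($lines)) : 'starting...';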
exec() is a blocking call and will NOT return control to PHP until the external program has terminated. That means you cannot dump the output on a line-by-line basis, because PHP is suspended while the external app is running.
For what you want, you need to use proc_open, which gives you pipes you can read from in a loop, e.g.
$descriptors = array(1 => array('pipe', 'w')); // capture the command's stdout
$proc = proc_open('.....', $descriptors, $pipes);
while($line = fgets($pipes[1])) {
    print($line);
    flush();
}
fclose($pipes[1]);
proc_close($proc);
There are two problems with this approach:
The first is, as #Marc B notes, that exec will block until it's finished. You'll have to devise some way of measuring progress.
The second is that using ob_flush() in this way amounts to holding the connection between server & client open and dribbling the data out a little at a time. This is not something that the HTTP protocol was designed for and while it might work sometimes, it's not going to work consistently - different browsers and different servers will time out differently. The better way to do it is via AJAX calls: using Javascript's setTimeout() function (or setInterval()), make a call to the server periodically and have the server send back a progress report.

PHP Memory differing server to server

I have a hefty PHP script.
So much so that I have had to do
ini_set('memory_limit', '3000M');
set_time_limit (0);
It runs fine on one server, but on another I get: Out of memory (allocated 1653342208) (tried to allocate 71 bytes) in /home/writeabo/public_html/propturk/feedgenerator/simple_html_dom.php on line 848
Both are on the same package from the same host, but different servers.
The problem above is solved; new problem below for the bounty.
Update: The script is so big because it crawls a site and parses data from 252 pages, including over 60,000 images, which it makes two copies of. I have since broken it down into parts.
I have another problem now though, when I am writing the image from the outside site to my server like this:
try {
    $imgcont = file_get_contents($va); // $va is an img src from an array of thousands of srcs
    $h = fopen($writeTo,'w');
    fwrite($h,$imgcont);
    fclose($h);
} catch(Exception $e) {
    $error .= (!isset($error)) ? "error with <img src='" . $va . "' />" : "<br/>And <img src='" . $va . "' />";
}
All of a sudden it goes to a 500 internal server error page and I have to run it again, at which point it works, because files are only copied if they don't already exist. Is there any way I can receive the 500 response code and send the request back to the URL to make it go again, as this is all meant to be an automated process?
If this is memory related, I would personally use copy() rather than file_get_contents(). It supports the file wrappers the same way, and I don't see any advantage in loading the whole file into memory just to write it back to the filesystem.
Otherwise, your error_log might give you more information as to why the 500 happens.
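For illustration, the copy() version of the loop body from the question (same variables, error handling kept as in the original):
// copy() streams the remote image straight to disk instead of buffering it in a string
if (!copy($va, $writeTo)) {
    $error .= (!isset($error)) ? "error with <img src='" . $va . "' />" : "<br/>And <img src='" . $va . "' />";
}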
There are three parties involved here:
Remote - The server(s) that contain the images you're after
Server - The computer that is running your php script
Client - Your home computer if you are running the script from a web browser, or the same computer as the server if you are running it from Cron.
Is the 500 error you are seeing being generated by 'Remote' and seen by 'Server' (i.e. the images are temporarily unavailable), or is it being generated by 'Server' and seen by 'Client' (i.e. there is a problem with your script)?
If it is being generated by 'Remote', then see Ali's answer for how to retry.
If it is being generated by your script on 'Server', then you need to identify exactly what the error is - the php error logs should give you more information. I can think of two likely causes:
Reaching PHP's time limit. PHP will only spend a certain amount of time working before returning a 500 error. You can set this to a higher value, or regularly re-set the timer with a call to set_time_limit(), but that won't work if your server is configured in safe mode.
Reaching PHP's memory limit. You seem to have encountered this already, but it's worth making sure your script still isn't eating lots of memory. Consider outputting debug data (possibly only if you set $config['debug_mode'] = true or something). I'd suggest:
try {
    echo 'Getting '.$va.'...';
    $imgcont = file_get_contents($va); // $va is an img src from an array of thousands of srcs
    $h = fopen($writeTo,'w');
    fwrite($h,$imgcont);
    fclose($h);
    echo 'saved. Memory usage: '.(memory_get_usage() / (1024 * 1024)).' <br />';
    unset($imgcont);
} catch(Exception $e) {
    $error .= (!isset($error)) ? "error with <img src='" . $va . "' />" : "<br/>And <img src='" . $va . "' />";
}
I've also added a line to remove the image from memory, in case PHP isn't doing this correctly itself (in theory that line shouldn't be necessary).
You can avoid both problems by making your script process fewer images at a time and calling it regularly - either using cron on the server (the ideal solution, although not all shared web hosts allow this), or some software on your desktop computer. If you do this, make sure you consider what will happen if there are two copies of the script running at the same time - will they both fetch the same image at the same time?
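As a rough sketch of that batching idea (the file names and the batch size are arbitrary placeholders):
// process the next 50 images per run and remember how far we got
$all  = file('image_urls.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$done = (int) @file_get_contents('progress.txt');
foreach (array_slice($all, $done, 50) as $va) {
    // ... fetch and save the image as before ...
}
file_put_contents('progress.txt', min($done + 50, count($all)));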
So it sounds like you're running this process via a web browser. I'm guessing that you may be getting the 500 error from Apache timing out after a certain period of time, or the process dying, or something funky. I would suggest you do one of the following:
A) Move the image downloading to a background process: run the crawl script in the browser, have it write the URLs of the images to be downloaded to the DB or somewhere, and have another script fire up via cron to fetch all the images. You could also have this script work in batches of 100 or so at a time to keep memory consumption down.
B) Call the script directly from the command line (this is really the preferred method for something like this anyway, and you should still probably separate the image fetching into another script).
C) If the command line is not an option for some reason, have your browser-loaded script touch a file, and have a cron job that runs every minute and looks for that file. When it exists, it fires up your script; you can have the output written to a file for you to check later, or send an email when it's completed.
Is there any way I can receive the 500 response code and send the request back to the URL to make it go again? As this is all meant to be an automated process?
Here's the simple version of how I would do it:
function getImage($va, $writeTo, $retries = 3)
{
    while ($retries > 0) {
        if ($imgcont = file_get_contents($va)) {
            file_put_contents($writeTo, $imgcont);
            return true;
        }
        $retries--;
    }
    return false;
}
This doesn't create the file unless we successfully get our image file, and will retry three times by default. You will of course need to add any required exception handling, error checking, etc.
I would definitely stop using file_get_contents() and write the files in chunks, like this:
$read = fopen($url, 'rb');
$write = fopen($local, 'wb');
$chunk = 8096;
while (!feof($read)) {
    fwrite($write, fread($read, $chunk));
}
fclose($read);
fclose($write);
This will be nicer to your server, and should hopefully solve your 500 problems. As for "catching" a 500 error, this is simply not possible. It is an irretrievable error thrown by your script and written to the client by the web server.
I'm with Swish, this is not really the kind of task that PHP is intended for - you'd be much better off using some sort of server-side scripting.
Is there any way I can receive the 500 response code and send the request back to the URL to make it go again?
Have you considered using another library? Fetching files from an external server seems to me more like a job for curl or ftp than file_get_contents, etc. If the error is external, and you're using curl, you can detect the 500 return code and handle it appropriately without crashing. If not, then maybe you should split your program into two files - one of which fetches a single file/image, and the other that uses curl to repeatedly call the first one. Unless the 500 error means that all PHP execution crashes, you would be able to detect the failure and handle it.
Something like this pseudocode:
file1.php:
foreach ($list_of_files as $filename) {
    do {
        // call_curl() stands in for a helper that makes the request and returns the HTTP status code
        $x = call_curl('file2.php', $filename);
    } while ($x == 500);
}
file2.php:
$filename = $_GET['filename'];
$results = use_curl_to_get_page($filename); // placeholder helper that fetches the remote file
echo $results;
Thanks for all your input. I had separated everything out by the time I wrote this question, so the crawler fires the image grabber, etc.
I took on board the solution to split the number of images, and that also helped.
I also added a try/catch around the file read.
This was only being called from the browser during testing, but now that it is all up and running it is going to be a cron job.
Thanks Swish and Benubird for your particularly detailed and educational answers. Unfortunately I had no cooperation with the developers on the backend where the images are coming from (long and complicated story).
Anyway, all good now, so thanks. (Swish, how do you call a script from the command line? My knowledge of this field is severely lacking.)
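For reference, calling a script from the command line just means invoking the PHP CLI binary with the script's path; a rough sketch with a placeholder script name, plus a cron entry that runs it every ten minutes:
php /home/writeabo/public_html/propturk/feedgenerator/fetch_images.php
# crontab entry (placeholder paths):
*/10 * * * * php /home/writeabo/public_html/propturk/feedgenerator/fetch_images.php >> /tmp/fetch_images.log 2>&1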
