PHP - ftp_get only works once

I'm connecting to an FTP server that I have no control over, and I'm pretty sure it's running something old and outdated, judging by other issues I've run into.
I'm simply using this code in a loop to get all the files in a directory.
ftp_get($this->conn_id, $remote, $local, FTP_ASCII);
The first time all goes well, but after that I get this error thrown for each file I try to get: "There is already an active transaction"
I've tried both passive and active mode, as well as a non-blocking get, with no luck. It's the exact same code I use to connect to other FTP servers and get files without a problem.
edit: Oddly enough, closing the connection, sleeping 3 seconds, and creating a new connection between each get yields the same results...
EDIT: Solved. It turns out that despite the errors, the files are still being retrieved. The catch block was swallowing the error, so I didn't realize it. I'll just ignore that error.
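For anyone hitting the same thing, here is a minimal sketch of the loop with the catch block that was masking the error ($remoteFiles and $localDir are hypothetical names, and it assumes an error handler that converts warnings to exceptions, as the catch in the question implies):
foreach ($remoteFiles as $remote) {
    $local = $localDir . '/' . basename($remote);
    try {
        ftp_get($this->conn_id, $remote, $local, FTP_ASCII);
    } catch (\Throwable $e) {
        // "There is already an active transaction": on this server the
        // file has been transferred anyway, so this error is safe to ignore
    }
}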

Try using ftp_fget instead and saving the file before trying to get another one.
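A sketch of that approach ($remote and $local are placeholders): open an explicit local handle, transfer into it, and close it before starting the next get, so each file is fully written out first.
// Transfer into an explicit handle and release it before the next get
$handle = fopen($local, 'w');
if (ftp_fget($this->conn_id, $handle, $remote, FTP_ASCII)) {
    // transfer for this file is complete
}
fclose($handle);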

It seems like "There is already an active transaction" may mean the connection is still transferring data when you try to use it again. Maybe add a sleep() after puts and gets to give the transfer time to finish and see if that makes a difference. You shouldn't have to do that in PHP, but I would try it just to rule it out as a possible issue.

Connection to Artemis via Stomp breaks when trying to read big messages using SSL

My code is fairly simple. I'm using the library at https://github.com/stomp-php/stomp-php and trying to read messages from an Artemis queue. It's just a simple $stomp->read();.
Expected behaviour:
I get one message from the queue, or get told that there are no messages in the queue
What is happening:
The read method throws an exception (see below)
When we connect without SSL, over a basic TCP connection and without a certificate, everything works perfectly fine. It happens only when we connect with the ssl scheme, the SSL port, and the certificate.
The exception is "Was not possible to read data from stream.", thrown at [stomp-php directory]/src/Network/Connection.php line 473.
Here is the context for the stream connection:
ssl:
    peer_name: '[censored]'
    cafile: '[censored certificate path].cer'
The certificate file exists and is read correctly (when I change the path, an exception is thrown even before trying to send a message). The peer name is also correct, since any other value triggers a different error telling me that the peer name is incorrect.
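For reference, wiring that context into stomp-php might look like the sketch below; the host, port, queue, and certificate path are placeholders, and it assumes the Connection constructor accepts a stream-context array, as in stomp-php 4.x:
use Stomp\Client;
use Stomp\Network\Connection;
use Stomp\StatefulStomp;

// Hypothetical broker and certificate path
$context = [
    'ssl' => [
        'peer_name' => 'broker.example.com',
        'cafile'    => '/etc/ssl/broker.cer',
    ],
];

$connection = new Connection('ssl://broker.example.com:61617', 5, $context);
$stomp = new StatefulStomp(new Client($connection));
$stomp->subscribe('/queue/example');
$frame = $stomp->read(); // the call that throws over SSL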
My test is simple: I have a file that sends the messages, and a file that reads them. The sending always works when I don't subscribe to the queue. The reading is kinda messy.
New information: the reading seems to work when I remove 5 specific messages from the sending. That means if I send one of those 5, the read throws the exception. If I send only the other messages and none of these 5, reading works great. I would assume the messages themselves are the cause, but again, when I'm not connecting over SSL, everything works correctly.
New information again: every message that I have to withhold in order to avoid the error contains a large number of lines. I tried sending one of them again with every node but one removed from its (XML) content: it worked correctly. So I tried with ~900 nodes: error. ~200 nodes: error. ~130 nodes: sometimes an error. ~80 nodes: working.
Does SSL have trouble with large messages?
New information again again: I tried var_dumping the result of the fread call in the library. When the error occurs, the result is an empty string (''). From what I read in the docs, fread returns false on failure and an empty string on timeout. That would be consistent with the "New information again" block above, where we discovered that large messages are causing the problem.
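A quick way to confirm that from outside the library (a diagnostic sketch around a generic stream; $stream and $maxReadBytes are placeholders):
$data = fread($stream, $maxReadBytes);
if ($data === false) {
    // hard read failure
} elseif ($data === '') {
    $meta = stream_get_meta_data($stream);
    if ($meta['timed_out']) {
        // the read timed out mid-frame, consistent with large messages
    }
}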
I tried stream_set_timeout() with 60 seconds, I tried sending a heartbeat manually, I tried setting the heartbeat via the library, I tried changing the library's timeouts, and I tried increasing maxReadBytes. Nothing has worked so far; still the same behaviour.
Fixed by reverting stomp-php library to version 4.3.1 :(

XML Parsing Error: xml processing instruction not at start of external entity

I have a vBulletin 3.8 forum.
When we click the Edit button on any post (so that the Quick Edit form should be displayed), I get this error in the browser's console:
XML Parsing Error: xml processing instruction not at start of external entity
Location: http://www.xxxxx.xx/ajax.php?do=quickedit&p=438
Row number 2, Column 1:
... the Quick Edit form is not appearing and the progress bar is displayed permanently.
I have tried disabling hooks/plugins, but the problem still appears.
I have this line in config.php: ini_set("display_errors", false); so I don't think it is a PHP fatal error/warning that is breaking the normal XML syntax.
I have been informed that this started after the site was moved to another server. Does that suggest anything to you?
Any general idea about this error?
EDIT:
Well, I found the reason for this issue, but I don't know how to fix it. Exactly the same site works perfectly on a localhost testing board, but on the live server ANY HTML page, AJAX call, etc. has a useless empty line as line #1.
For normal HTML pages there is no reason for the browser to raise an error, but for an AJAX call, this empty line at the top of the response breaks the browser's XML parsing. So it seems to be a server/PHP/Apache setting that adds this empty line. Any idea how to fix it? https://imgur.com/a/4neb0
It might be too late for you, but newcomers running PHP/nginx/Apache may find this explanation of the cause useful.
The answer is simple: when moving the code, you might not have used git/rsync/scp. Let me guess: you used a ZIP file (and probably both Windows and Linux were involved).
Discovering this was a two-day journey with many things tried.
We had the same error, and we were also moving our servers. We tried the following:
We thought the server software version was a problem.
We thought the cloud provider OS image was a problem.
We used docker to avoid these problems, but the empty line problem persists.
We thought a trailing ?> in the code was the problem; I went through all of them, but it wasn't.
I finally asked my colleague: how did you get the code? From git? He said he downloaded it as a ZIP and then uploaded it to the server.
I removed the code on the server (which had been extracted from a ZIP) and used git to download a fresh copy from our GitHub.
Magic: problem solved. The empty line was gone.
So I think the zipping process may have altered blank lines in some files. Always use git.
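If you can't switch the deployment to git right away, a quick scan like this (a sketch; the path is a placeholder) can locate PHP files that begin with a UTF-8 BOM or stray whitespace, which is exactly what produces that leading blank line:
$it = new RecursiveIteratorIterator(new RecursiveDirectoryIterator('/path/to/site'));
foreach ($it as $file) {
    if ($file->isFile() && $file->getExtension() === 'php') {
        // Read the first few bytes of each PHP file
        $head = file_get_contents($file->getPathname(), false, null, 0, 8);
        if (strncmp($head, "\xEF\xBB\xBF", 3) === 0 || preg_match('/^\s/', $head)) {
            echo $file->getPathname(), PHP_EOL; // starts with BOM or whitespace
        }
    }
}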

Laravel SQL Chunk gives -902: Error reading data from the connection

I'm currently querying a huge Firebird (v2.5) table (with millions of rows) in order to perform some row-level operations. To achieve that, the code is using chunking from Laravel 5.1, somewhat like this:
DB::connection('USER_DB')
    ->table($table->name)
    ->chunk(min(5000, floor(65500 / count($table->fields))), function ($data) {
        // running code and saving
    });
For some reason, I keep receiving the following error:
SQLSTATE[HY000]: General error: -902 Error reading data from the connection.
I've already tried changing the chunk size and trying different code, but the error still appears. Sometimes it happens at the beginning of the table, and sometimes after parsing several hundred thousand or even millions of rows. The thing is that I need to process the rows within this single transaction (so I can't stop and reopen the script).
I tested memory usage on the server (which runs in a different place from the database), and it is barely using any.
While writing this, I rechecked the Firebird log and found the following entry:
INET/inet_error: read errno = 10054
As far as I could find, this isn't actually a Firebird problem but a Winsock connection-reset error; is that correct? If so, how can I prevent it from happening during the chunked query? And how can I check whether the problem is Windows or the firewall?
Update I
Digging into the firebird2.5.log on the PHP server, I found these errors:
INET/inet_error: send errno = 104
REMOTE INTERFACE/gds__detach: Unsuccesful detach from database.
I have found the root of my problem: the server was resetting the connection. To avoid that, I added a "heartbeat" query that runs every few minutes. With this strategy I was able to keep the connection from being reset.
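The fix might look roughly like this sketch; the interval is an assumption, and SELECT 1 FROM RDB$DATABASE is the usual Firebird no-op query:
$lastPing = time();
DB::connection('USER_DB')
    ->table($table->name)
    ->chunk(5000, function ($rows) use (&$lastPing) {
        // ... row-level work and saving ...
        if (time() - $lastPing > 120) {
            // trivial query to keep the Firebird connection alive
            DB::connection('USER_DB')->select('SELECT 1 FROM RDB$DATABASE');
            $lastPing = time();
        }
    });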

PHP task terminated but no error condition raised

A PHP script is using the ZipArchive class and is potentially long running.
Since it is dying silently but writing a partial zip file, I wrapped error_log() statements around the $zip->close() call (just before this code, ini_set() directs error logging to a file and sets error reporting to E_ALL).
error_log("calling zip->close()");
$rc = $zip->close();
error_log("zip->close() returned $rc");
The logging file shows the first error_log, but never the second.
A unix top command shows the process running for a total CPU time of c. 2.5 minutes before it goes defunct.
I have also tried trapping the error with set_error_handler(), and the handler using error_log() to record the catch. But nothing shows up in the log file.
I'm assuming the process is being bounced, maybe by Apache (I have no control over Apache or PHP).
My question is: why can't I see this error in the file being used by error_log?
Thanks for the suggestions for circumventing the problem, but my question remains:
Why can't I see this error in the log? Or why can't I catch this error via set_error_handler()?
set_time_limit(0); // 0 means the script may run indefinitely
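As for why set_error_handler() never fires: it does not see fatal errors, and a process killed from outside (e.g. by Apache or a SIGKILL) never gets to run any more PHP at all. A shutdown function can at least log genuine fatals (a sketch; it still won't run if the process is killed by a signal):
register_shutdown_function(function () {
    $err = error_get_last();
    if ($err !== null) {
        // reaches the error_log file for fatal errors, but not for SIGKILL
        error_log(sprintf('shutdown: %s in %s:%d',
            $err['message'], $err['file'], $err['line']));
    }
});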

Multiple mysqli connection problems

I am having some strange issues with mysqli connections.
I was working on a page with mysqli, and it had been working fine all day. I then made a copy of this page, stripped it down to debug a problem, and tested it as a different file. Connection-wise, it worked fine. But upon requesting the original file I was working on, I get this error:
Access denied for user 'user'@'localhost' (using password: YES)
I don't understand why. I have closed the connections after I have finished using them each time, although I don't see why that would be an issue. Interestingly, an older version of the file works fine, despite containing the exact same connection details and code.
What is going on?
Turn the question around. Rather than saying the two versions (the one that works and the one that doesn't) are identical in the aspects that matter, focus on the ways in which they are different and try to isolate which difference(s) also matter.
Make an additional copy of the working version. Verify that it works. Try making it into a copy of the non-working version by applying as many of the changes as you can, one by one, to this test copy, until you have something that is as close as possible to the broken version but that still works. Compare these two, and that should show you where the problem is.
Weird. If you are testing the files from the same machine, they should both work (if they have the same code).
Double-check the username and password, e.g. for spaces or strange characters.
Just for the sake of it, run a diff between the working copy and the older version of the file. Check for issues like moved brackets, variable name changes, etc. Maybe the part of the code that defines the username and password never gets run!
If you need a free program for that, check out WinDiff.
If you put:
error_reporting(E_ALL);
as the first line in your code, do you get any errors on either page? There may be something strange, like the program no longer being able to open an include file.
You said copying File A to File B means File A doesn't work. What happens if you copy File A to File B, delete File A, and then copy File B to File A?
Ensure that your IP is on the MySQL allowed-connections list, and make sure your password is correct. Try providing a full hostname rather than localhost if possible.
Post your code if unsuccessful.
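One more diagnostic worth trying (a sketch; host and credentials are placeholders): make mysqli throw instead of failing quietly, and connect via 127.0.0.1 to force TCP rather than the local socket that localhost implies.
// Throw mysqli_sql_exception on any failure instead of returning false
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
$db = new mysqli('127.0.0.1', 'user', 'password', 'dbname');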
