MongoDB PHP header data timeout

My development team is having trouble accessing a remote MongoDB database from their local development environments.
The remote Ubuntu development server is running the newest v2.4.3 of MongoDB and PHP 5.3 with the mongo-php-driver v1.3.7 built for PHP 5.3. mongodb.conf is nearly empty except for basic path setup. There are currently no shards or replica sets.
All team members are using OSX 10.8, PHP 5.3, and have the mongo-php-driver v1.3.7 built for PHP 5.3. Some team members use XAMPP, others are using the built-in OSX AMP stack. We test on all major desktop browsers.
Whenever a page needs to grab data from Mongo, we start by calling this connection function:
public static function connect($server, $db)
{
    $connection = new MongoClient(
        "mongodb://{$server}:27017",
        array(
            "connectTimeoutMS" => 20000,
            "socketTimeoutMS" => 20000
        )
    );
    return $connection->$db;
}
However, nearly 30% of page loads are experiencing the following error:
Failed to connect to: www.development-server.com:27017: send_package: error reading from socket: Timed out waiting for header data
It seems that a large portion of those errors occur when refreshing a page, rather than navigating to a new page, but that's more of a guess than a fact. I've checked everyone's php.ini file and confirmed that default_socket_timeout = 60 is set.
The development server also hosts a copy of the site, but has never thrown the error, presumably since it's only calling localhost to get there. When I installed MongoDB locally, the errors also went away.
This really appears to be a timeout issue, but I cannot find any further settings, parameters, or configurations to adjust the expiry period. Are there any?

The response from #hernan_arg got me thinking about another possibility. Instead of relying on the one-and-only connection attempt to succeed (which seems to take forever), is it acceptable to stick the connection in a loop until it succeeds?
public static function connect($server, $db)
{
    try {
        $connection = new MongoClient("mongodb://{$server}");
    } catch (MongoConnectionException $e) {
        // Recurse until a connection is established.
        return self::connect($server, $db);
    }
    return $connection->$db;
}
Logging indicates that when the connection does fail, it fails quickly, and the loop establishes a new connection in a much more timely manner than the infinite timeout does. Supposing the database becomes unreachable, I'm assuming I can rely on PHP's execution timeout to eventually kill the process.
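If relying on the execution timeout feels fragile, a capped retry loop keeps the fast-retry behaviour while still failing deterministically. A minimal sketch, assuming the same MongoClient call as above; the cap of 5 attempts is an arbitrary choice:

public static function connect($server, $db, $attempts = 5)
{
    // Failed attempts return quickly, so retrying in a loop is cheap.
    for ($i = 0; $i < $attempts; $i++) {
        try {
            $connection = new MongoClient("mongodb://{$server}");
            return $connection->$db;
        } catch (MongoConnectionException $e) {
            // Fall through and try again.
        }
    }
    // Give up instead of waiting for max_execution_time to kill the page.
    throw new MongoConnectionException("No connection after {$attempts} attempts");
}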

Try connecting without the port in the connection string, or set

array(
    "connectTimeoutMS" => -1,
    "socketTimeoutMS" => -1
)

(infinite timeout)
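Applied to the question's connect() helper, that suggestion would look something like this (a sketch; per this answer, -1 disables the timeouts):

public static function connect($server, $db)
{
    $connection = new MongoClient(
        "mongodb://{$server}", // no explicit port; the driver defaults to 27017
        array(
            "connectTimeoutMS" => -1, // never time out while connecting
            "socketTimeoutMS" => -1   // never time out while reading
        )
    );
    return $connection->$db;
}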

The 1.4.1 release of the driver addresses some stability issues over unstable networks.
Assuming you are talking to a replica set, the driver will discard servers that are being unreasonably slow. Rather than reattempting to connect to them, the driver will now blacklist them for a few seconds without throwing these exceptions upon connection (assuming it can connect to at least one server).

Related

PDO returns "SQLSTATE [HY000]: General Error: 10 Disk I/O error" on sqlite database load

A bit about my current setup: the website is hosted on a Windows Server 2012 IIS with PHP 7.1. The site runs on Laravel and uses Medoo to access the database; however, any SQL statement fails with the error mentioned in the title.
Currently the file permissions are set to Full Control for debugging purposes.
The code used for the call is the following:
$pdo = new \Medoo\Medoo([
    'database_type' => 'sqlite',
    'database_file' => $config->database_path
]);
$query = $pdo->select('computers', '*');
I have tried making a SQL dump of the SQLite database file using a program called "DB Browser for SQLite", then making a new database file from scratch and importing the SQL dump.
When this is done the program works without problem, so my current thought is that there might be a file size limit in PDO's SQLite driver, since the original file is about 2.8 MB while the new one is 1.9 MB.
Has anyone encountered a similar problem, or does anyone have advice on how to fix it?
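One way to narrow this down is to take Medoo out of the equation and run the same query through plain PDO. A minimal sketch, assuming the same database path as above; if this also throws the disk I/O error, the problem sits in PDO/SQLite rather than Medoo:

// Open the same file directly with PDO, bypassing Medoo.
$pdo = new \PDO('sqlite:' . $config->database_path);
$pdo->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);

// Same query as the Medoo select('computers', '*') call.
$rows = $pdo->query('SELECT * FROM computers')->fetchAll(\PDO::FETCH_ASSOC);
var_dump($rows);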

PHP Composer using define

I am using phpseclib to do some SFTP stuff. At the moment I can't log in, and I am trying to find out why. The docs state I should do something like this:
set_include_path(get_include_path() . PATH_SEPARATOR . 'phpseclib');
include 'Net/SFTP.php';
define('NET_SFTP_LOGGING', NET_SFTP_LOG_COMPLEX);
$sftp = new NET_SFTP('***');
if (!$sftp->login('***', '***')) {
    print_r($sftp->getSFTPErrors());
}
So, I need to define the type of logging in order to print out the SFTPErrors.
I am doing things differently than the above because I am using Composer. So, as usual, I load autoload.php. Instead of include, I make use of use, e.g.
use phpseclib\Net\SFTP;
I then proceed to do something like the following:
define('NET_SFTP_LOGGING', NET_SFTP_LOG_COMPLEX);
$sftp = new SFTP($config::SFTP_SERVER);
if (!$sftp->login($config::SFTP_USER, $config::SFTP_PASSWORD)) {
    print_r($sftp->getSFTPErrors());
    exit('Login Failed');
}
If I do this, however, I get the following output:
Notice: Use of undefined constant NET_SFTP_LOG_COMPLEX - assumed 'NET_SFTP_LOG_COMPLEX' in ...
Array
(
)
Login Failed
So it appears that with Composer, I can't define a constant in the same way, and the printout of the errors produces an empty array.
So, how can I define this constant in my composer project?
Thanks
A few things.
If you're using the namespaced version of phpseclib (as evidenced by your use phpseclib\Net\SFTP;) then you're using the 2.0 branch. The documentation on the website is for the 1.0 branch. For the 2.0 branch you need to do as Félix Saparelli suggested and use SSH2::LOG_COMPLEX.
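In practice that would look something like the sketch below. This is an assumption based on the class-constant suggestion above, not the official docs; the define names are the same as in 1.0, but the values come from the namespaced classes:

use phpseclib\Net\SSH2;
use phpseclib\Net\SFTP;

// In the 2.0 branch the log levels are class constants, but they are
// still handed to the library through the same global defines.
define('NET_SSH2_LOGGING', SSH2::LOG_COMPLEX);
define('NET_SFTP_LOGGING', SFTP::LOG_COMPLEX);

$sftp = new SFTP($config::SFTP_SERVER);
$sftp->login($config::SFTP_USER, $config::SFTP_PASSWORD);
echo $sftp->getLog(); // raw packet log, not the error array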
That said, logging isn't going to show SFTP errors. Logging shows you the raw packets. Here's an example of what the logs produce:
http://phpseclib.sourceforge.net/ssh/log.txt
You get these logs by doing $ssh->getLog().
For the errors you don't need to enable anything - it places any errors it receives from the server into an array that it's returning to you. phpseclib does this automatically and this behavior cannot be disabled.
Also, $sftp->getSFTPErrors() is great for SFTP errors, but at the login stage you might be getting SSH errors rather than SFTP errors. You'd get SSH errors by doing $sftp->getErrors(). The thing is, SFTP operates at a higher layer than SSH. SSH won't succeed if TCP/IP can't make a connection, and SFTP won't succeed if SSH can't make a connection. So you ought to be checking all the layers.
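Put together, a login check that inspects both layers might look like this (a sketch based on your snippet):

if (!$sftp->login($config::SFTP_USER, $config::SFTP_PASSWORD)) {
    // SSH-layer errors (key exchange, authentication, ...)
    print_r($sftp->getErrors());
    // SFTP-layer errors (only populated once the SFTP subsystem is up)
    print_r($sftp->getSFTPErrors());
    exit('Login Failed');
}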
Finally, it's quite possible the failure is happening for reasons for which errors would not be returned. Maybe the server requires compression, which phpseclib doesn't support, e.g.
http://www.frostjedi.com/phpbb3/viewtopic.php?p=221481#p221481
I also don't know if you'd get an error if the key or password you were using was simply invalid.
Really, there could be any number of causes for an inability to connect. You could be using SELinux, which by default disables outbound connections from PHP scripts running on port 80; there could be a routing issue; etc. (All of these affect fsockopen, but, in all, there are just a lot of potential causes for failure.)
Overall, I'd say you're on the right track with the logs. But do $sftp->getLog() instead of $sftp->getSFTPErrors() and then include the logs in your post.

Error using SoapClient in PHP and AMPPS on OS X

I have installed AMPPS on OS X Mavericks and I am getting the error below when I try to access a WSDL over HTTPS in PHP by using the SoapClient class.
SOAP-ERROR: Parsing WSDL: Couldn't load from
'https://www.some-domain.com/Webservice.asmx?WSDL' : failed to load
external entity
"https://www.some-domain.com/secure/api/Webservice.asmx?WSDL"
OpenSSL, SOAP and cURL are all enabled when I run phpinfo(). I can retrieve the WSDL contents just fine by using file_get_contents(). There is no firewall blocking the connection. Other computers on the same network can connect just fine. Below is my code. It works on the production server as well as on several other computers that I have used it on.
$this->wsdlUrl = 'https://www.some-domain.com/Webservice.asmx?WSDL';
$this->client = new SoapClient($this->wsdlUrl);
$this->client->Connect(array(
    /* Login credentials */
));
I have also tried using a stream context with verify_peer set to false, but with the same result. Increasing the default_socket_timeout option, as some people have suggested, also does not work.
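For reference, the stream-context attempt would look something like this (a sketch using PHP's standard SSL context options; disabling verification is for debugging only):

// Attempted workaround: disable SSL peer verification.
$context = stream_context_create(array(
    'ssl' => array(
        'verify_peer' => false,
        'allow_self_signed' => true,
    ),
));

$this->client = new SoapClient($this->wsdlUrl, array(
    'stream_context' => $context,
));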
I understand that there are several other questions concerning this error, but none of the proposed solutions that I could find work for me.
Any ideas? Thanks in advance!
I'm sure this will point you in the right direction:
try {
    $sc = new SoapClient('https://www.some-domain.com/Webservice.asmx?WSDL');
    $sc->Connect(/* Login credentials */);
} catch (SoapFault $fault) {
    print_r($fault->faultstring);
}

Predis is giving 'Error while reading line from server'

I am using Predis, subscribed to a channel and listening. It throws the error below and dies after exactly 60 seconds. It is definitely not my web server's error or its timeout.
There is a similar issue being discussed here, but I could not get much out of it.
I tried setting connection_timeout to 0 in the Predis conf file, but it doesn't help much.
Also, if I keep the worker busy (send data to it so it processes), it doesn't give any error. So it's likely a timeout somewhere, specifically in the connection.
Here is the code snippet that is likely producing the error, because when data is given to the worker it runs this code and moves on, producing no error after that:
$pubsub = $redis->pubSub();
$pubsub->subscribe($channel1);
foreach ($pubsub as $message) {
    // doing stuff here and unsubscribing from the channel
}
Trace
PHP Fatal error: Uncaught exception 'Predis\Network\ConnectionException' with message 'Error while reading line from the server' in Predis/Network/ConnectionBase.php:159 Stack trace:
#0 library/vendor/predis/lib/Predis/Network/StreamConnection.php(195): Predis\Network\ConnectionBase->onConnectionError('Error while rea...')
#1 library/vendor/predis/lib/Predis/PubSub/PubSubContext.php(259): Predis\Network\StreamConnection->read()
#2 library/vendor/predis/lib/Predis/PubSub/PubSubContext.php(206): Predis\PubSub\PubSubContext->getValue()
#3 pdf/file.php(16): Predis\PubSub\PubSubContext->current()
#4 {main} thrown in Predis/Network/ConnectionBase.php on line 159
I checked the redis.conf timeout too; it's also disabled.
Just set the read_write_timeout connection parameter to 0 or -1 to fix this, e.g.

$redis = new Predis\Client('tcp://10.0.0.1:6379?read_write_timeout=0');

Setting connection parameters is documented in the README. The author of Predis noted the relevance of the read_write_timeout parameter to this error in an issue on GitHub, in which he notes that:
If you are using Predis in a daemon-like script you should set read_write_timeout to -1 if you want to completely disable the timeout (this value works with older and newer versions of Predis). Also, remember that you must disable the default timeout of Redis by setting timeout = 0 in redis.conf or Redis will drop the connection of idle clients after 300 seconds of inactivity.
I had a similar problem. A better solution than setting the timeout to 0 is to reconnect with exponential backoff, bounded by a lower and an upper limit, as sketched below.
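A minimal sketch of that idea, reusing $redis and $channel1 from the question; the 1 s and 64 s bounds are arbitrary assumptions:

$delay = 1;      // lower limit, in seconds
$maxDelay = 64;  // upper limit, in seconds

while (true) {
    try {
        $pubsub = $redis->pubSub();
        $pubsub->subscribe($channel1);
        foreach ($pubsub as $message) {
            // doing stuff here and unsubscribing from the channel
        }
        break; // clean unsubscribe: leave the retry loop
    } catch (\Predis\Network\ConnectionException $e) {
        // Wait, then retry with a doubled (but capped) delay.
        sleep($delay);
        $delay = min($delay * 2, $maxDelay);
    }
}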
Changing the config parameter connection_timeout to 0 will also solve the issue.
I found the resolution to the problem. There is a limit to the number of ports an application server can use when connecting to a particular application on another machine, and these ports were getting exhausted.
We increased the limit and the problem got resolved.
How did we find out about this problem?
In PHP, we were getting a "Cannot assign requested address" error while creating a socket (error code 99).
In /etc/redis/redis.conf, set:
timeout 0
I'm using Heroku and solved this problem by switching from the Redis Heroku addon to the Redis Enterprise addon, and then:
use Predis\Client as PredisClient;
to solve the collision with GuzzleHttp\Client. You can leave out the as PredisClient alias if you are not using GuzzleHttp.
And then the connection:
$redisClient = new PredisClient(array(
    'host' => parse_url(env('REDIS_URL'), PHP_URL_HOST),
    'port' => parse_url(env('REDIS_URL'), PHP_URL_PORT),
    'password' => parse_url(env('REDIS_URL'), PHP_URL_PASS),
));
(You can find your 'REDIS_URL' automatically prefilled in Heroku config vars).

SoapClient: faultcode WSDL

When I try to use SoapClient:
try {
    $client = new SoapClient('http://someurl/somefile.wsdl');
} catch (SoapFault $e) {
    var_dump($e);
}
I catch an error with:
["faultstring"] => "SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://someurl/somefile.wsdl' : failed to load external entity "http://someurl/somefile.wsdl"
["faultcode"] => "WSDL"
I can manually download http://someurl/somefile.wsdl and can read it with file_get_contents(). I used the same code before on a different computer and it worked, so it is possibly a problem with PHP or Apache settings.
Arch Linux with the latest updates for PHP and Apache. I have tried enabling all PHP extensions.
Were you able to get the WSDL using file_get_contents() in the browser?
I had a similar issue recently on Arch Linux with the same faultstring, no matter which WSDL file was used. The same code worked without any problem on another Arch Linux machine and on a Windows XP box.
After some research, it turned out the problem arose only when I accessed the page in a browser; a script run from the command line worked as expected. Then I changed the script to download the WSDL file directly, using the aforementioned file_get_contents(), and it gave me a warning: "php_network_getaddresses: getaddrinfo failed: Name or service not known".
A few tutorials later (on SO, or this one: http://albertech.net/2011/05/fix-php_network_getaddresses-getaddrinfo-failed-name-or-service-not-known/) I still hadn't fought off the problem. But then I discovered what introduced the issue: I had been running NetworkManager since installing Arch (to better handle wireless), and a few weeks later I added mysqld and httpd at the end of the DAEMONS section in rc.conf. It seems this broke DNS resolution for Apache.
Having two options (go back to starting the servers manually, or try another network manager), I switched to wicd and haven't run into the issue again.
