XMPP (XMPPHP) session won't start - php

I'm working on a new server and, at first, everything looked fine. The ejabberd web admin runs OK and I was even able to create a user through that interface.
The problem is that the same application that ran fine on my previous server now freezes while waiting for the session to start, at this line:
$this->lnk->processUntil('session_start');
The $this->lnk->connect(); call works fine, but it seems the session never gets established. Any suggestions on where or what I should look at first?
Notes:
The XMPP application is configured the same way as it was on the old server.
Here is the whole code:
$this->lnk = new XMPPHP_XMPP($this->config['host'],
                             $this->config['port'],
                             $this->config['username'],
                             $this->config['password'],
                             $this->config['service'],
                             $this->config['domain'],
                             $printlog = false,
                             $loglevel = XMPPHP_Log::LEVEL_VERBOSE);
$this->lnk->useEncryption(true);
$this->lnk->connect();
$this->lnk->processUntil('session_start');

The issue was caused by $this->lnk->useEncryption(true);. Since my new server didn't have proper SSL/TLS settings, this line made the code freeze.
Possible fixes are disabling encryption or sorting out your SSL/TLS certificates.
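For a quick check that TLS really is the culprit, a minimal sketch (only leave encryption off on a network you trust; otherwise fix the server-side certificates instead):

// Sketch: temporarily turn encryption off to confirm the TLS setup is
// what is blocking the session, then fix the server's certificates.
$this->lnk->useEncryption(false);
$this->lnk->connect();
$this->lnk->processUntil('session_start');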

Related

Handling expiring Let's Encrypt SSL in a websocket server with Ratchet PHP

After reading practically every article on getting the Ratchet PHP implementation of WebSockets to work with SSL, I finally got it working with the help of many tips, mainly the info here https://github.com/ratchetphp/Ratchet/issues/489, which basically changes the way you create the IoServer so that it uses React's SecureServer. My implementation at the server end in PHP looks like this:
$loop = \React\EventLoop\Factory::create();
$secure_websockets = new \React\Socket\Server('0.0.0.0:8080', $loop);
$secure_websockets = new \React\Socket\SecureServer($secure_websockets, $loop, [
    'local_cert'  => '/path/to/letsencrypt/cert.crt',
    'local_pk'    => '/path/to/letsencrypt/key.key',
    'verify_peer' => false
]);
$app = new \Ratchet\Http\HttpServer(
    new \Ratchet\WebSocket\WsServer(
        new Chat()
    )
);
$server = new \Ratchet\Server\IoServer($app, $secure_websockets, $loop);
$server->run();
And on the client side, the JavaScript:
function startWebsocket() {
    var conn = new WebSocket('wss://domain.com:port');
    conn.onopen = function(e) {
        console.log("Connection established!");
    };
}
startWebsocket();
And that works just fine.
BUT... I'm using Let's Encrypt certificates on the server, and they have a rather short lifetime. That's fine for HTTPS, as the host (Siteground) automatically renews the certificate before expiry, but if I'm pointing at a specific cert/key pair by path (as above), those files will expire.
So, my options are:
A: Put a repeating note in my calendar to remind me to update the Cert/Key paths every 3 months and live with it, or..
B: Is there a way to discover the current Cert/Key pair and drop that in as a variable?
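For what it's worth, certbot normally keeps stable symlinks under /etc/letsencrypt/live/<domain>/ that always point at the newest certificate, so referencing those paths (rather than a dated copy) survives renewals. Whether Siteground exposes that directory to a non-root account is an assumption here, and the long-running Ratchet process may still need a restart after each renewal to pick up the new files. A hypothetical sketch, reusing $secure_websockets and $loop from the code above:

// Hypothetical: point the TLS context at certbot's stable "live" symlinks,
// which always reference the most recently issued certificate.
$certDir = '/etc/letsencrypt/live/domain.com'; // adjust to your host's layout
$secure_websockets = new \React\Socket\SecureServer($secure_websockets, $loop, [
    'local_cert'  => $certDir . '/fullchain.pem',
    'local_pk'    => $certDir . '/privkey.pem',
    'verify_peer' => false
]);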
Some things to bear in mind:
I'm on Siteground with a Cloud Server, but not with root access (I stupidly signed up for 'Geeky Features' years ago before my techy knowledge improved enough for me to need root).
Whilst I don't have root, it is effectively a dedicated server, so I can ask Siteground's support guys to change things for me - but I'd really need to be very specific with any requests. Trying things out is probably not an option here.
I explored the other method of getting wss: working with Ratchet, namely proxying the https request over to the ws port. I didn't get very far with that, since the version of Apache installed doesn't seem to be recent enough to support the tunnel, and although there are references to NGINX running in parallel to Apache on Siteground, I just kept getting errors any time I tried to use it as a proxy. It didn't help that without root access I'm pretty limited in what I can do.

ftp_rawlist always fails on FTPES server with passive mode

I have to connect to an FTPES server to retrieve data. Connecting and logging in work just fine, but my call to ftp_rawlist always fails and returns false.
I am using this code for debugging purposes:
$ftp = ftp_ssl_connect($ftp_host);
if (ftp_login($ftp, $ftp_user, $ftp_pass)) {
    $p = ftp_pasv($ftp, true);
    var_dump($p);
    $r = ftp_rawlist($ftp, '/', true);
    var_dump($r);
} else {
    echo 'Could not login';
}
$p is always true, $r always false.
When I connect to the server through Filezilla everything works fine and I can list directory content and more.
Update #1: I tried listing not only '/' but various subfolders on the server; they all fail through the script.
Update #2: I also tried using ftp_raw to send the commands for a listing, but the LIST command runs for some time and then does not return any result at all. Yet HELP lists LIST as a valid command for the server... Strange...
Update #3: I have now tried phpseclib, but while I can connect, I can't log in with the user/password combination. Support from the maintainer of the FTPES server is not happening ("works fine for $somebody else..."), so I need to figure this out another way... :-)
To bring this to an end: as the project deadline came closer, a solution had to be found. And although this is no real answer to the question as asked, I'd like to show what I did to work around it. Maybe someone stumbles upon this through googling.
Besides the things mentioned in the OP, I also tried connecting to FTPS from PHP using a certificate for authentication, which didn't work either. Since nothing works as it is supposed to, I wonder whether the FTPS server is really configured correctly after all.
The people who run the server told me that everything is fine and that their CLI curl call works for them, so they see no need to investigate further.
As a result, I set up a sandbox account on a server which has shell_exec() enabled. A script now runs there which gets a file listing via curl and then downloads the files via curl, using the commands provided by the server operator. That server can be accessed through normal SFTP and therefore acts as a "proxy FTP" which regularly mirrors the remote FTPS server's file structure.
Although I find this "solution" quite hacky, it runs robustly, stably and fast for the moment. We will therefore be able to keep the operation running this way for this year (it only runs for around three months before Christmas) and will revisit it in the new year to develop a more stable solution.
Maybe the server guys will also be less stressed then and willing to help... ;-)
Add the following call to ftp_set_option() on a line before the call to ftp_pasv():
ftp_set_option($ftp, FTP_USEPASVADDRESS, false);
ftp_pasv($ftp, true);
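For context, here is a sketch of how that fits into the debugging script from the question. Disabling FTP_USEPASVADDRESS makes PHP ignore the IP address the server advertises in its PASV reply (often a private/NAT address) and reuse the address of the control connection instead, which is a common reason data-channel commands like LIST hang while login works:

$ftp = ftp_ssl_connect($ftp_host);
if (ftp_login($ftp, $ftp_user, $ftp_pass)) {
    // Ignore the address from the PASV reply; reuse the control connection's address.
    ftp_set_option($ftp, FTP_USEPASVADDRESS, false);
    ftp_pasv($ftp, true);
    var_dump(ftp_rawlist($ftp, '/', true));
} else {
    echo 'Could not login';
}
ftp_close($ftp);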

How to connect to ESL from a remote server?

I would like to build a web interface in PHP to see FreeSWITCH activity (calls, etc.), possibly hosted on a different server than the one where FS is running.
I can see the server status on the FS server itself using the command line (php single_command.php status), but now I would like to see this status from another server.
When I copy the ESL.php file to this remote server and try to check the status, I get this error message:
Fatal error: Call to undefined function new_ESLconnection() in
/var/www/freeswitch/ESL.php on line 127
This is my index.php file:
<?php
ini_set('display_errors', 1);

$password = "ClueCon";
$port = "8021";
$host = "192.168.2.12";

require_once('ESL.php');

set_time_limit(0); // Remove PHP's 30-second time limit, as the event loop below runs indefinitely

// Connect to FreeSWITCH
$sock = new ESLconnection($host, $port, $password);

// Send the status command (change this depending on your needs)
$sock->sendRecv("status");

// Grab events until the process is killed
while ($sock->connected()) {
    $event = $sock->recvEvent();
    print_r($event->serialize());
}
?>
I understand that the web server doesn't have FreeSWITCH installed, so the error message is not surprising, but I don't see how to access this information from the web server.
Thank you for your help.
Depending on your needs you can use either an inbound or an outbound socket. I do not know much about PHP and the FS event socket, but I have tried enough with Python. I highly recommend going through this link.
So if you just want to do small tasks like initiating a call, bridging two given numbers, etc., I think you should go with an inbound socket (sending CLI commands from your web server to the FreeSWITCH server) or mod_xml_rpc.
And if you want full control of everything that happens on the FS server, like showing live call status and modifying call states, or, say, a complete interactive telephony dashboard, then you should go with an outbound socket (your FS server will send all events to your web server).
However, in your case I think the problem is that you did not properly build the PHP ESL module.
This link might help you install ESL.
Rather than using ESL, you might want to consider using XML-RPC. The connection is very straightforward:
https://wiki.freeswitch.org/wiki/Freeswitch_XML-RPC
The credentials for XML-RPC are in your autoload_configs/xml_rpc.conf.xml
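As a hypothetical sketch of that approach: once mod_xml_rpc is loaded you can hit its HTTP interface from any server. The port, path and credentials below are assumptions based on a stock install, so check xml_rpc.conf.xml on the FS box for the real values:

<?php
// Query FreeSWITCH "status" over mod_xml_rpc's HTTP interface (sketch only).
$fsHost = '192.168.2.12';
$fsPort = 8080;          // http-port in xml_rpc.conf.xml (assumed default)
$fsUser = 'freeswitch';  // auth-user in xml_rpc.conf.xml (assumed default)
$fsPass = 'works';       // auth-pass in xml_rpc.conf.xml (assumed default)

$ch = curl_init("http://$fsHost:$fsPort/webapi/status");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERPWD, "$fsUser:$fsPass");
$status = curl_exec($ch);
curl_close($ch);

echo $status === false ? 'Could not reach mod_xml_rpc' : $status;
?>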

Can't access Session variables on different servers

I have dedicated a server to maintain Memcached and store sessions, so that all my servers can work on the same session without difficulties.
But somehow I think I may have misunderstood what Memcached can do for PHP sessions.
I thought that I would be able to sit on Apache 1 (a.domain.com) and create a session, e.g. $_SESSION['test'] = "This string is saved in the session", then go to Apache 2 (b.domain.com) or c.domain.com, simply continue the session with echo $_SESSION['test'];, and it would output the string.
It doesn't, but I am sure I was told that Memcached would be a great tool when multiple web servers need to share the same session.
What have I done wrong?
By the way, we seriously need a fully detailed tutorial or ebook describing how to set up the server, use PHP, build clusters, etc. based on Memcached.
In my php.ini file it says:
session.save_path = "192.168.100.228:11211"
Tutorials told me not to define a protocol, and the IP address belongs to Apache 3, the memcached server.
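For reference, the save path alone isn't enough; the session handler has to be switched over as well. A minimal php.ini sketch, assuming the memcached PECL extension (the one your working test code uses); the older memcache extension instead expects session.save_handler = memcache and a tcp:// style save path:

session.save_handler = memcached
session.save_path = "192.168.100.228:11211"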
Here is an image of phpinfo().
The domain in session.cookie_domain is not really called domain; it is a .local domain. It has been changed for this image.
EDIT:
Just for information: when I use simple Memcached-based PHP commands, everything works perfectly. But somehow, when I try to save a session, the memcached server doesn't store the item.
This works:
<?php
$m = new Memcached();
$m->addServer('192.168.100.228', 11211);
$m->set('int', 99);
$m->set('string', 'a simple string');
$m->set('array', array(11, 12));
/* expire 'object' key in 5 minutes */
$m->set('object', new stdclass, time() + 300);
var_dump($m->get('int'));
var_dump($m->get('string'));
var_dump($m->get('array'));
var_dump($m->get('object'));
?>
This doesn't work
<?php
session_start();
$_SESSION['name'] = "This is a simple string.";
?>
EDIT 2: THE SOLUTION
I noticed that after I deleted the cache history, including cookies etc., the browser didn't actually finish the job. The problem continued because the browser hung on to the original individual session ID, which kept each subdomain separate from the others.
Everything described here is correct; just make sure your browser really resets its cookies when you ask it to. >.<
By default (session) cookies are domain specific, so set the cookie domain in your php.ini
session.cookie_domain = ".domain.com"
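If editing php.ini isn't convenient, the same thing can be set at runtime before the session starts; a sketch (the domain is a placeholder):

<?php
// ".domain.com" is a placeholder; this must run before session_start().
// ini_set('session.cookie_domain', '.domain.com') works the same way.
session_set_cookie_params(0, '/', '.domain.com');
session_start();
?>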
Also see here
Allow php sessions to carry over to subdomains
Make sure to restart your webserver and clear all of your browser cookies after making the change. Your browser could get confused if you have cookies with the same name but different subdomains.
Other things to check:
Check that sessions work fine on each individual server.
Make sure the session handler is set properly by using phpinfo(); if you are working with a large codebase, especially inherited / 3rd-party stuff, there may be something overriding it.
If you are using 3rd-party code - like phpBB, for instance - check that the cookie settings are correct in there too.
(Please note: this answer was tidied to remove brainstorming; all relevant info was kept.)

session_start hangs

Since a few hours ago, our server hangs every time you do a session_start().
For testing purposes I created a script which looks like this:
<?php
session_start();
?>
Calling it from the console hangs, and it can't even be stopped with Ctrl-C; only kill -9 works. The same happens when calling it via Apache. /var/lib/php/session/ stays empty, but permissions are absolutely fine: www can write and also has read permissions on all parent folders.
According to the admins, no changes were made on the server and there is no special code registered for sessions. The server is CentOS 4 or 5, and yesterday everything was working perfectly. We rebooted the server and updated PHP, but nothing changed.
I've run out of ideas; any suggestions?
UPDATE
We solved this problem by moving the project to another server, so while the problem still exists on one server there is no immediate need for a solution anymore.
I will keep the question open in case someone has an idea for others having a similar problem in the future, though.
There are many possible reasons for that; here are a few of them:
A. The session file could be opened exclusively.
When the file lock is not released properly for whatever reason, it causes session_start() to hang indefinitely on any future script execution.
Workaround: use session_set_save_handler() and make sure the write function uses fopen($file, 'w') instead of fopen($file, 'x') (see the sketch at the end of this answer).
B. Never use the following settings (session entropy file set to "/dev/random"); this will cause your session_start() to hang:
<?php
ini_set("session.entropy_file", "/dev/random");
ini_set("session.entropy_length", "512");
?>
C.
session_start() needs a directory to write to.
You can get Apache plus PHP running under a normal user account. Apache will then of course have to listen on a port other than 80 (for instance, 8080).
Be sure to do the following things:
- create a temporary directory PREFIX/tmp
- put php.ini in PREFIX/lib
- edit php.ini and set session.save_path to the directory you just created
Otherwise, your scripts will seem to 'hang' on session_start().
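For reason A above, a minimal sketch of such a handler, assuming plain file-based sessions; the class name and path handling are illustrative, not a drop-in for every setup:

<?php
// Sketch: a file-based save handler whose write() uses a non-exclusive
// fopen('w'), so a stale exclusive lock can't hang later session_start() calls.
class NonLockingFileSessionHandler implements SessionHandlerInterface
{
    private $savePath;

    public function open($savePath, $sessionName)
    {
        $this->savePath = $savePath ?: sys_get_temp_dir();
        return is_dir($this->savePath) || mkdir($this->savePath, 0700, true);
    }

    public function close()
    {
        return true;
    }

    public function read($id)
    {
        $file = "$this->savePath/sess_$id";
        return is_readable($file) ? (string) file_get_contents($file) : '';
    }

    public function write($id, $data)
    {
        $fp = fopen("$this->savePath/sess_$id", 'w'); // 'w', not 'x'
        if ($fp === false) {
            return false;
        }
        fwrite($fp, $data);
        fclose($fp);
        return true;
    }

    public function destroy($id)
    {
        $file = "$this->savePath/sess_$id";
        return !file_exists($file) || unlink($file);
    }

    public function gc($maxlifetime)
    {
        // Remove session files older than the configured lifetime.
        foreach (glob("$this->savePath/sess_*") as $file) {
            if (filemtime($file) + $maxlifetime < time()) {
                unlink($file);
            }
        }
        return true;
    }
}

session_set_save_handler(new NonLockingFileSessionHandler(), true);
session_start();
?>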
If this helps:
In my scenario, session_start() was hanging at the same time I was using the XDebug debugger within PHPStorm, the IDE, on Windows. I found that there was a clear cause: Whenever I killed the debug session from within PHPStorm, the next time I tried to run a debug session, session_start() would hang.
The solution, if this is your scenario, is to make sure to restart Apache every time you kill an XDebug session within your IDE.
I had a weird issue with this myself.
I am using CentOS 5.5x64, PHP 5.2.10-1. A clean ANSI file in the root with nothing other than session_start() was hanging. The session was being written to disk and no errors were being thrown. It just hung.
I tried everything suggested by Thariama, and checked PHP compile settings etc.
My Fix:
yum reinstall php; /etc/init.d/httpd restart
Hope this helps someone.
To everyone complaining about the 30 seconds of downtime being unacceptable, this was an inexplicable issue on a brand new, clean OS install, NOT a running production machine. This solution should NOT be used in a production environment.
OK, I faced the same problem on two PCs: one a Mac mini with XAMPP, one Windows 10 with XAMPP.
On both, PHP took forever to run session_start(). Both are on PHP 7.x.
I found that the session file was locked for reading and writing, so I added code to make PHP read the session file and release the lock immediately when done:
<?php
session_start([
    'read_and_close' => true,
]);
?>
or
<?php
//For PHP 5.x
session_start();
session_write_close();
?>
After this, PHP unlocks the session file and the problem is solved.
The problem:
I've experienced (and fixed) the problem where file-based sessions hang the request, and database-based sessions get out of sync by storing out-of-date session data (like storing each session save in the wrong order).
This is caused by any subsequent request that loads the session while another is still running (simultaneous requests): Ajax calls, a video embed where the video file is delivered via a PHP script, a dynamic resource (like a script or CSS file) delivered via a PHP script, etc.
With file-based sessions, file locking blocks the session write and causes a deadlock between the simultaneous request threads.
With database-based sessions, the last request thread to complete becomes the most recent save, so for example a video delivery script will complete long after the page request and overwrite the since-updated session with old session data.
The fix:
If your Ajax or resource delivery script doesn't need to use sessions, the easiest fix is to just remove session usage from it.
Otherwise you'd best make yourself a coffee and do the following:
Write or employ a session handler (if not already doing so) as per http://www.php.net//manual/en/class.sessionhandler.php (many other examples are available via a Google search).
In your session handler's write() function, prepend this code:
// processes may declare their session as read only ...
if (!empty($_SESSION['no_session_write'])) {
    unset($_SESSION['no_session_write']);
    return true;
}
In your Ajax or resource delivery PHP script, add this code (after the session is started):
$_SESSION['no_session_write'] = true;
I realise this seems like a lot of stuffing around for what should be a tiny fix, but unfortunately, if you need simultaneous requests that each load the session, it is required.
NOTE: if your Ajax or resource delivery script does actually need to write/save data, then you need to do it somewhere other than the session, like a database.
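If you're on PHP 5.4+ and don't want to write a full handler, one way to wire the guard in is to extend the built-in SessionHandler class and override only write(); a sketch (the class name is hypothetical):

<?php
// Sketch: reuse PHP's default save handler, adding only the read-only guard.
class ReadOnlyAwareSessionHandler extends SessionHandler
{
    public function write($id, $data)
    {
        // Processes may declare their session as read only ...
        if (!empty($_SESSION['no_session_write'])) {
            unset($_SESSION['no_session_write']);
            return true;
        }
        return parent::write($id, $data);
    }
}

session_set_save_handler(new ReadOnlyAwareSessionHandler(), true);
session_start();
?>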
Just put session_write_close(); before session_start();, as below:
<?php
session_write_close();
session_start();
.....
?>
I don't know why, but changing this value in /etc/php/7.4/apache2/php.ini worked for me:
;session.save_path = "/var/lib/php/sessions"
session.save_path = "/tmp"
To throw another answer into the mix for those going bananas: I had session_start() dying only in particular cases and scripts. The reason was ultimately that I was storing a lot of data in the session after a particularly intensive script, and the call to session_start() was exhausting the memory_limit setting in php.ini.
After increasing 'memory_limit', those session_start() calls no longer killed my script.
For me, the problem seemed to originate from SELinux. The needed command was chcon -R -t httpd_sys_content_t [www directory] to give access to the right directory.
See https://askubuntu.com/questions/451922/apache-access-denied-because-search-permissions-are-missing
If you use pgAdmin 4 this can happen as well.
If you have File > Preferences > SQL Editor > Options > "Auto Commit" disabled, and you just ran a query using the query tool but didn't manually commit, then session_start() will freeze.
Enable auto commit, or manually commit, or just close pgAdmin, and it will no longer freeze.
In my case it seems it was the NFS share that was locking the session; after restarting the NFS server and enabling only one web-client node, sessions worked normally.
Yet another few cents that might help someone: in my case I was storing complex data containing several different class objects in $_SESSION, and session_start() couldn't handle the unserialization because not every class was loaded at the time session_start() ran. The solution in my case was to serialize/JSON-encode the data before saving it into $_SESSION and to reverse the process after getting the data out of the session.
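A minimal sketch of that approach; $order is a hypothetical object graph, and note that json_encode only captures public properties unless your classes implement JsonSerializable:

<?php
session_start();

// Store a plain JSON string instead of serialized objects, so a later
// session_start() never has to unserialize classes that aren't loaded yet.
$_SESSION['order_json'] = json_encode($order);

// ... on a later request ...
$order = json_decode($_SESSION['order_json'] ?? 'null', true);
?>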
