My PHP application dies with fastcgi: unexpected end-of-file - php

I updated to PHP 7 on my localhost, but since then, any time I want to redirect from one page to another in my Nette application, I receive the error: 500 - Internal Server Error.
I was searching through Stack Overflow and found a problem quite similar to mine here: How to solve "mod_fastcgi.c.2566 unexpected end-of-file (perhaps the fastcgi process died)" when calling .php that takes long time to execute?. However, I don't work with large files, and my connection dies immediately.
My /var/log/lighttpd/error.log
2016-03-06 10:54:11: (server.c.1456) [note] graceful shutdown started
2016-03-06 10:54:11: (server.c.1572) server stopped by UID = 0 PID = 351
2016-03-06 11:03:48: (log.c.194) server started
2016-03-06 11:07:17: (mod_fastcgi.c.2390) unexpected end-of-file (perhaps the fastcgi process died): pid: 21725 socket: unix:/run/lighttpd/php-fastcgi.sock-3
2016-03-06 11:07:17: (mod_fastcgi.c.3171) response not received, request sent: 1029 on socket: unix:/run/lighttpd/php-fastcgi.sock-3 for /~rost/lp/web/www/index.php?, closing connection
2016-03-06 11:09:01: (mod_fastcgi.c.2390) unexpected end-of-file (perhaps the fastcgi process died): pid: 21725 socket: unix:/run/lighttpd/php-fastcgi.sock-3
2016-03-06 11:09:01: (mod_fastcgi.c.3171) response not received, request sent: 1061 on socket: unix:/run/lighttpd/php-fastcgi.sock-3 for /~rost/lp/web/www/index.php?action=list&presenter=Campaign, closing connection
2016-03-06 11:09:06: (mod_fastcgi.c.2390) unexpected end-of-file (perhaps the fastcgi process died): pid: 21725 socket: unix:/run/lighttpd/php-fastcgi.sock-3
2016-03-06 11:09:06: (mod_fastcgi.c.3171) response not received, request sent: 942 on socket: unix:/run/lighttpd/php-fastcgi.sock-3 for /~rost/lp/web/www/index.php?, closing connection
2016-03-06 11:09:14: (mod_fastcgi.c.2390) unexpected end-of-file (perhaps the fastcgi process died): pid: 21725 socket: unix:/run/lighttpd/php-fastcgi.sock-3
2016-03-06 11:09:14: (mod_fastcgi.c.3171) response not received, request sent: 1051 on socket: unix:/run/lighttpd/php-fastcgi.sock-3 for /~rost/lp/web/www/index.php?action=out&presenter=Sign, closing connection
My /etc/lighttpd/lighttpd.conf
server.modules = ( "mod_userdir",
                   "mod_access",
                   "mod_accesslog",
                   "mod_fastcgi",
                   "mod_rewrite",
                   "mod_auth"
                 )
server.port = 80
server.username = "http"
server.groupname = "http"
server.document-root = "/srv/http"
server.errorlog = "/var/log/lighttpd/error.log"
dir-listing.activate = "enable"
index-file.names = ( "index.html" )
# Rewrite URL without dots to index.php
#url.rewrite-once = ( "/^[^.?]*$/" => "/index.php" )
mimetype.assign = ( ".html" => "text/html",
                    ".htm" => "text/html",
                    ".txt" => "text/plain",
                    ".properties" => "text/plain",
                    ".jpg" => "image/jpeg",
                    ".png" => "image/png",
                    ".svg" => "image/svg+xml",
                    ".gif" => "image/gif",
                    ".css" => "text/css",
                    ".js" => "application/x-javascript",
                    "" => "application/octet-stream"
                  )
userdir.path = "public_html"
# Fast CGI
include "conf.d/fastcgi.conf"
My /etc/lighttpd/conf.d/fastcgi.conf
server.modules += ( "mod_fastcgi" )
#server.indexfiles += ( "index.php" ) #this is deprecated
index-file.names += ( "index.php" )
fastcgi.server = (
    ".php" => (
        "localhost" => (
            "bin-path" => "/usr/bin/php-cgi",
            "socket" => "/run/lighttpd/php-fastcgi.sock",
            "max-procs" => 4, # default value
            "bin-environment" => (
                "PHP_FCGI_CHILDREN" => "1", # default value
            ),
            "broken-scriptfilename" => "enable"
        )
    )
)
Variables from /etc/php/php.ini
cat /etc/php/php.ini | grep max_execution_time
max_execution_time = 30
cat /etc/php/php.ini | grep default_socket_timeout
default_socket_timeout = 60
Update 7.3.2016
I switched from PHP FastCGI to php-fpm, and interestingly the problem persists, but less often. Sometimes the redirect ends in a 500 and sometimes it doesn't. The error log again:
2016-03-07 22:23:32: (mod_fastcgi.c.2390) unexpected end-of-file (perhaps the fastcgi process died): pid: 0 socket: unix:/run/php-fpm/php-fpm.sock
2016-03-07 22:23:32: (mod_fastcgi.c.3171) response not received, request sent: 1084 on socket: unix:/run/php-fpm/php-fpm.sock for /~rost/lp/web/www/index.php?action=out&presenter=Sign, closing connection

Also, try deleting the Nette cache first.

I've finally found a solution. It is probably a Nette/Cassandra-related problem.
The error was appearing because of the Nette\Security\Identity object, after I assigned user data to it:
public function authenticate(array $credentials) {
    // Retrieve username and password
    list($email, $passwd) = $credentials;

    // Select user with given email from database
    $usr = $this->daoManager->getDao("AppUser")->loadByEmail($email);
    if ($usr == null || count($usr) == 0) {
        $msg = 'The email is incorrect.';
        $arg = self::IDENTITY_NOT_FOUND;
        throw new Nette\Security\AuthenticationException($msg, $arg);
    }

    // TODO Check user password
    // TODO Check verification

    // Create identity - THE PROBLEM WAS HERE
    return new Identity($email, $usr['role'], $usr);
}
It was caused by the 'registered' value in the $usr array, which was of type Cassandra\Timestamp. Because of it, almost every redirect crashed with the above-mentioned error.
Following code fixed the issue:
return new Identity($email, $usr['role'], $this->fixUserArray($usr));
Where:
protected function fixUserArray(array $user) {
    $result = array();
    foreach ($user as $key => $val) {
        if ($key === "registered") {
            // Convert Cassandra\Timestamp to a plain Unix timestamp
            $result[$key] = $val->time();
        } else {
            $result[$key] = $val;
        }
    }
    return $result;
}
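If other non-serializable column types might appear later, a slightly more general variant can help. The sketch below is my own (the name `scalarizeUserArray` is not from the original code): it converts anything exposing a `time()` method, such as `Cassandra\Timestamp`, and leaves scalars untouched:

```php
<?php
// Sketch of a more general cleaner for identity data. Any object exposing a
// time() method (e.g. Cassandra\Timestamp) is converted to a plain Unix
// timestamp; DateTime objects become timestamps too; everything else passes through.
function scalarizeUserArray(array $user): array
{
    $result = [];
    foreach ($user as $key => $val) {
        if (is_object($val) && method_exists($val, 'time')) {
            $result[$key] = $val->time();
        } elseif ($val instanceof DateTimeInterface) {
            $result[$key] = $val->getTimestamp();
        } else {
            $result[$key] = $val;
        }
    }
    return $result;
}
```

Used in place of fixUserArray(), the Identity payload then contains only serializable values.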

Related

TCP Logstash PHP stream socket working in shell but not in a file

I have a Logstash TCP server running which accepts a TCP socket connection; I can write to that connection from the PHP interactive shell.
If I use the same code in a file, I get the following lines in the Logstash debug console and nothing goes into my Elasticsearch instance:
[DEBUG] 2020-07-15 10:31:40.987 [nioEventLoopGroup-2-3] jsonlines - config LogStash::Codecs::JSONLines/#charset = "UTF-8"
[DEBUG] 2020-07-15 10:31:40.987 [nioEventLoopGroup-2-3] jsonlines - config LogStash::Codecs::JSONLines/#id = "json_lines_dbda8bcd-69ed-4356-81af-381355f76e2f"
[DEBUG] 2020-07-15 10:31:40.987 [nioEventLoopGroup-2-3] jsonlines - config LogStash::Codecs::JSONLines/#enable_metric = true
[DEBUG] 2020-07-15 10:31:40.987 [nioEventLoopGroup-2-3] jsonlines - config LogStash::Codecs::JSONLines/#delimiter = "\n"
Logstash config
input {
  tcp {
    port => 5080
    codec => "json"
    id => "PHP_TCP_LOGS"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[xx]}"
  }
}
PHP File
$socket = stream_socket_client('tcp://localhost:5080', $errorNumber, $error, 30);
$a = ["foo"=>"barr"];
fwrite($socket, json_encode($a) . "\n");
echo $error;
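For reference, the write pattern itself can be exercised end to end without Logstash. The sketch below is my own: a throwaway local TCP server stands in for the `tcp` input, and the client side adds the connect-error check and explicit close that the snippet above omits:

```php
<?php
// A throwaway local server stands in for Logstash's tcp input so the
// client-side pattern is testable end to end.
$server = stream_socket_server('tcp://127.0.0.1:0', $srvErrNo, $srvErr);
$addr   = stream_socket_get_name($server, false);

$socket = stream_socket_client("tcp://$addr", $errorNumber, $error, 30);
if ($socket === false) {
    die("connect failed: $errorNumber $error\n");
}

$payload = json_encode(['foo' => 'barr']) . "\n"; // newline-delimited JSON
fwrite($socket, $payload);
fclose($socket); // close explicitly so the bytes are flushed before exit

$conn = stream_socket_accept($server, 5);
$line = fgets($conn);
echo $line; // the server side sees one complete JSON line
```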

Varnish POST cache not working through PHP cURL, however, it seems to be working with terminal cURL

I referred to this document to enable POST caching on an Apache server. The server-side scripting language used is PHP.
As mentioned, I called the PHP file using:
curl --data '{"maxresults":2000}' http://localhost/varnishoutput/varnishtest.php
OUTPUT
{"1":{"{\"maxresults\":2000}":""},"2":"NOO"}
The output was cached: any changes made in varnishtest.php didn't change the response for the same POST value. However, if the POST value was changed, the newly changed response was shown. Thus the POST cache works from the terminal.
However, if I call the same URL from another PHP file using cURL, the output is not cached. Here is the PHP code that I used:
varnishpost.php
<?php
$data = array();
$data['maxresults'] = 2000;
$api = "http://localhost/varnishoutput/varnishtest.php";

$s = curl_init();
curl_setopt($s, CURLOPT_URL, $api);
curl_setopt($s, CURLOPT_RETURNTRANSFER, true);
curl_setopt($s, CURLOPT_POST, true);
curl_setopt($s, CURLOPT_POSTFIELDS, $data);
curl_setopt($s, CURLINFO_HEADER_OUT, true); // expects a boolean, not the data array

$response = curl_exec($s);
$header = curl_getinfo($s);
curl_close($s);

$dataArr = json_decode($response);
print_r($dataArr);
echo "\n\n";
print_r($header);
echo "\n\n";
echo $api . "\n\n";
?>
varnishtest.php
<?php
echo json_encode(array("1"=>$_POST,"2"=>"NOO"));
?>
Output
stdClass Object
(
    [1] => stdClass Object
        (
            [maxresults] => 2000
        )
    [2] => NOO
)
Header Output
Array
(
[url] => http://localhost/varnishoutput/varnishtest.php
[content_type] => text/html; charset=UTF-8
[http_code] => 200
[header_size] => 359
[request_size] => 207
[filetime] => -1
[ssl_verify_result] => 0
[redirect_count] => 0
[total_time] => 0.005651
[namelookup_time] => 0.004235
[connect_time] => 0.004385
[pretransfer_time] => 0.004474
[size_upload] => 149
[size_download] => 37
[speed_download] => 6547
[speed_upload] => 26367
[download_content_length] => 37
[upload_content_length] => 149
[starttransfer_time] => 0.00478
[redirect_time] => 0
[redirect_url] =>
[primary_ip] => ::1
[certinfo] => Array
(
)
[primary_port] => 80
[local_ip] => ::1
[local_port] => 57130
[request_header] => POST /varnishoutput/varnishtest.php HTTP/1.1
Host: localhost
Accept: */*
Content-Length: 149
Expect: 100-continue
Content-Type: multipart/form-data; boundary=----------------------------072b0f786662
)
The header output doesn't show any "X-Varnish" value in the header info. Also, the output changes every time the response file is changed for the same POST value. Thus the POST cache isn't working in this case.
The VCL file code is as follows:
#
# This is an example VCL file for Varnish.
#
# It does not do anything by default, delegating control to the
# builtin VCL. The builtin VCL is called when there is no explicit
# return statement.
#
# See the VCL chapters in the Users Guide at https://www.varnish-cache.org/docs/
# and http://varnish-cache.org/trac/wiki/VCLExamples for more examples.
# Marker to tell the VCL compiler that this VCL has been adapted to the
# new 4.0 format.
vcl 4.0;
import std;
import bodyaccess;
# Default backend definition. Set this to point to your content server.
backend default {
    .host = "192.168.0.108";
    .port = "8080";
    .connect_timeout = 120s;
    .first_byte_timeout = 120s;
    .between_bytes_timeout = 120s;
}
sub vcl_recv {
    unset req.http.X-Body-Len;

    # Happens before we check if we have this in cache already.
    #
    # Typically you clean up the request here, removing cookies you don't need,
    # rewriting the request, etc.

    # Do not cache these paths.
    if (req.url ~ "(/userInfo/|gm_internet_testing.php|/abc-wct/|/ren_api/|/book_api/|/eve_api/getEveScore.php|/iptoct_api/|/ct_api/.*\bweather\b|/rec_api/.*\bopennow\b)") {
        return (pass);
    }

    # Remove the &_= parameter, a random integer jQuery appends when ajax cache is false
    if (req.url ~ "&_=[0-9]+$") {
        set req.url = regsub(req.url, "&_=[0-9]+$", "");
    }

    if (req.method == "POST") {
        std.log("Will cache POST for: " + req.http.host + req.url);
        std.cache_req_body(500KB);
        set req.http.X-Body-Len = bodyaccess.len_req_body();
        if (req.http.X-Body-Len == "-1") {
            return (synth(400, "The request body size exceeds the limit"));
        }
        return (hash);
    }

    # Handling to cache pages across browsers / devices
    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } else if (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # unknown algorithm
            unset req.http.Accept-Encoding;
        }
    }

    if (req.http.user-agent ~ "MSIE") {
        set req.http.user-agent = "MSIE";
    } else {
        set req.http.user-agent = "Mozilla";
    }

    unset req.http.Cookie;
}

sub vcl_hash {
    # To cache POST and PUT requests
    if (req.http.X-Body-Len) {
        bodyaccess.hash_req_body();
    } else {
        hash_data("");
    }
}

sub vcl_backend_fetch {
    if (bereq.http.X-Body-Len) {
        set bereq.method = "POST";
    }
}

sub vcl_backend_response {
    # Happens after we have read the response headers from the backend.
    #
    # Here you clean the response headers, removing silly Set-Cookie headers
    # and other mistakes your backend does.
}

sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send the
    # response to the client.
    #
    # You can do accounting or modifying the final object here.
}
Can anyone help?
I notice a discrepancy between the POST data on the command line and the data in the PHP script.
The --data '{"maxresults":2000}' in your cURL request will actually send a JSON object as payload.
When I look at your PHP script, $data['maxresults'] = 2000; is just regular POST data, which is not the same.
Because you perform a bodyaccess.hash_req_body(); in vcl_hash, the lookup hash will be different if the POST body is different. That's why you'll get a miss in your script.
JSON payload
If you want the same result, I'd advise you to set the POST fields as follows:
curl_setopt($s,CURLOPT_POSTFIELDS,json_encode($data));
Regular POST fields
If you want to use regular POST fields instead, you can perform the following cURL call on the command line:
curl --data 'maxresults=2000' http://localhost/varnishoutput/varnishtest.php
And this is the change you need to make in your PHP file if you want to use pure POST fields:
curl_setopt($s,CURLOPT_POSTFIELDS,http_build_query($data));
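The difference between the two bodies is easy to see in isolation. This small comparison (my own illustration, values taken from the question) shows why the body-based hash can never match:

```php
<?php
// The CLI call sent a raw JSON string; passing a PHP array to
// CURLOPT_POSTFIELDS instead produces a multipart/form-data body.
// Different bytes on the wire mean different bodyaccess hashes.
$data = ['maxresults' => 2000];

$jsonBody = json_encode($data);       // what the command-line cURL sent
$formBody = http_build_query($data);  // classic urlencoded POST fields

var_dump($jsonBody); // string(19) "{"maxresults":2000}"
var_dump($formBody); // string(15) "maxresults=2000"
```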

How do I set a timeout for writing to a slow filesystem in PHP?

I want to write to a file from PHP, and set a timeout for that operation to 5 seconds.
I am writing a PHP script to check if a mounted NFS server is available for writing. I need an alert if it is unavailable. If the NFS server is disconnected, then different ways of trying to write to it will fail immediately, and I can then send an alert easily. No problem so far.
But when it is slow (perhaps due to bandwidth congestion or server load) a simple approach such as file_put_contents() can take several minutes, as it does not timeout, and eventually succeeds.
I have tried various combinations of file_get_contents(), fwrite(), stream_set_timeout(), ini_set('default_socket_timeout', 5) but with no success.
Is it possible to set a timeout for writing to a file? If so, how?
Here's a sample of what I have tried:
<?php
$f = '/path/to/file';
$test = 'some large string';

$fp = fopen($f, 'w');
if (!$fp) {
    echo "Unable to open\n";
} else {
    stream_set_timeout($fp, 0, 1000);
    fwrite($fp, $test);
    $info = stream_get_meta_data($fp);
    print_r($info);
    fclose($fp);
    if ($info['timed_out']) {
        echo 'Connection timed out!';
    }
}

/*
Outputs:
Array
(
    [timed_out] =>
    [blocked] => 1
    [eof] =>
    [wrapper_type] => plainfile
    [stream_type] => STDIO
    [mode] => w
    [unread_bytes] => 0
    [seekable] => 1
    [uri] => /path/to/file
)
*/
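One workaround, sketched below, follows from the fact that stream_set_timeout() only applies to socket-based streams, not plain files: delegate the write to a child PHP process and enforce a wall-clock deadline from the parent. The helper name and polling interval are my own choices, not a standard API:

```php
<?php
// Sketch: enforce a wall-clock timeout on a file write by doing the write in
// a child PHP process and killing it if the deadline passes.
function writeWithTimeout(string $path, string $data, float $timeoutSec): bool
{
    $cmd = PHP_BINARY . ' -r '
         . escapeshellarg('exit(file_put_contents($argv[1], stream_get_contents(STDIN)) === false ? 1 : 0);')
         . ' ' . escapeshellarg($path);
    $proc = proc_open($cmd, [0 => ['pipe', 'r']], $pipes);
    if (!is_resource($proc)) {
        return false;
    }
    fwrite($pipes[0], $data);
    fclose($pipes[0]);

    $deadline = microtime(true) + $timeoutSec;
    while (microtime(true) < $deadline) {
        $status = proc_get_status($proc);
        if (!$status['running']) {
            proc_close($proc);
            return $status['exitcode'] === 0;
        }
        usleep(50000); // poll every 50 ms
    }
    proc_terminate($proc, 9); // deadline passed: kill the stuck writer
    proc_close($proc);
    return false;
}
```

A call such as writeWithTimeout('/mnt/nfs/probe', 'x', 5.0) then returns false when the mount stalls, so the alert can fire.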

Process signal handlers are not called

I'm working on a pre-forking TCP socket server written in PHP.
The daemon (the parent process), forks some number of children and then waits until it's told to exit and the children are all gone or it receives a signal.
SIGINT and SIGTERM cause it to send SIGTERM to all of the children.
The children set up their own signal handlers: SIGTERM causes a clean exit. SIGUSR1 causes it to dump some status information (just print that it received the signal in the sample code below).
If a child exits unexpectedly, the parent starts a new child unless the exiting flag has been set by the SIGINT handler.
The initial children, forked during daemon initialization, react to signals as expected.
Newly forked children, started to replace an unexpectedly exited child, do not respond to signals.
The following code can be used to demonstrate this:
<?php
$children = [];
$exiting = false;

pcntl_async_signals( true );
pcntl_signal( SIGCHLD, 'sigchldHandler' );
pcntl_signal( SIGINT, 'sigintHandler' );
pcntl_signal( SIGTERM, 'sigintHandler' );

// Fork our children.
for( $ii = 0; $ii < 1; $ii++ )
{
    startChild();
}

// Forks a single child.
function startChild()
{
    global $children;
    echo "Parent: starting child\n";
    $pid = pcntl_fork();
    switch( true )
    {
        case ( $pid > 0 ):
            $children[$pid] = $pid;
            break;
        case ( $pid === 0 ):
            child();
            exit( 0 );
        default:
            die( 'Parent: pcntl_fork() failed' );
            break;
    }
}

// As long as we have any children...
while( true )
{
    if( empty( $children ) ) break;
    sleep( 1 );
}

// The child process.
function child()
{
    $pid = posix_getpid();
    echo "Child $pid: started\n";
    sleep( 10 ); // Give us a chance to start strace (4/30/19 08:27)
    pcntl_sigprocmask( SIG_SETMASK, [] ); // Make sure nothing is blocked.
    pcntl_async_signals( true );          // This may be inherited.
    pcntl_signal( SIGINT, SIG_IGN );      // Ignore SIGINT.
    pcntl_signal( SIGTERM, function() use ( $pid ) // Exit on SIGTERM.
    {
        echo "Child $pid: received SIGTERM\n";
        exit( 0 );
    }, false );
    pcntl_signal( SIGUSR1, function() use ( $pid ) // Acknowledge SIGUSR1.
    {
        printf( "Child %d: Received SIGUSR1\n", $pid );
    });
    // Do "work" here.
    while( true )
    {
        sleep( 60 );
    }
}

// Handle SIGCHLD in the parent.
// Start a new child unless we're exiting.
function sigchldHandler()
{
    global $children, $exiting;
    echo "Parent: received SIGCHLD\n";
    while( true )
    {
        if( ( $pid = pcntl_wait( $status, WNOHANG ) ) < 1 )
        {
            break;
        }
        echo "Parent: child $pid exited\n";
        unset( $children[$pid] );
        if( !$exiting )
        {
            startChild();
        }
    }
}

// Handle SIGINT in the parent.
// Set exiting to true and send SIGTERM to all children.
function sigintHandler()
{
    global $children, $exiting;
    $exiting = true;
    echo PHP_EOL;
    foreach( $children as $pid )
    {
        echo "Parent: sending SIGTERM to $pid\n";
        posix_kill( $pid, SIGTERM );
    }
}
Run this script in a terminal session. The initial output will be similar to this with a different PID:
Parent: starting child
Child 65016: started
From a different terminal session issue a kill command:
# kill -USR1 65016
The child process will display this in the first terminal session:
Child 65016: Received SIGUSR1
The child is receiving and processing signals as expected. Now terminate that first child:
# kill -TERM 65016
The output to the first terminal session will look like this (with different PIDs):
Child 65016: received SIGTERM
Parent: received SIGCHLD
Parent: child 65016 exited
Parent: starting child
Child 65039: started
The new child process will receive, but not react to, any signals at this point, except SIGKILL and SIGSTOP, which can't be caught.
Sending the parent a SIGINT will cause it to send a SIGTERM to the new child. The child won't act on it, and the parent will wait until the child is forcibly killed before exiting (yes, the production code will include a timeout and SIGKILL any remaining children).
Environment:
- Ubuntu 18.04.2
- macOS Mojave 10.14.3 (same behavior)
- PHP 7.2.17 (cli)
I find myself out of ideas. Thoughts?
EDIT 30-Apr-2019 08:27 PDT:
I have a little more information. I added a sleep( 10 ) right after the 'echo "Child $pid: started\n";' to give me a chance to run strace on the child.
Based on the strace output, it looks like the signals are being delivered, but the child signal handler is not called.
# sudo strace -p 69710
strace: Process 69710 attached
restart_syscall(<... resuming interrupted nanosleep ...>) = 0
rt_sigprocmask( SIG_SETMASK, [], ~[ KILL STOP RTMIN RT_1], 8) = 0
rt_sigaction( SIGINT, {sa_handler = SIG_IGN, sa_mask = [], sa_flags = SA_RESTORER, sa_restorer = 0x7f6e8881cf20}, null, 8) = 0
rt_sigprocmask( SIG_UNBLOCK, [ INT ], null, 8 ) = 0
rt_sigaction( SIGTERM, {sa_handler = 0x55730bdaf2e0, sa_mask = ~[ ILL TRAP ABRT BUS FPE KILL SEGV CONT STOP TSTP TTIN TTOU SYS RTMIN RT_1], sa_flags = SA_RESTORER | SA_INTERRUPT | SA_SIGINFO, sa_restorer = 0x7f6e8881cf20}, null, 8) = 0
rt_sigprocmask( SIG_UNBLOCK, [ TERM ], null, 8 ) = 0
rt_sigaction( SIGUSR1, {sa_handler = 0x55730bdaf2e0, sa_mask = ~[ ILL TRAP ABRT BUS FPE KILL SEGV CONT STOP TSTP TTIN TTOU SYS RTMIN RT_1], sa_flags = SA_RESTORER | SA_RESTART | SA_SIGINFO, sa_restorer = 0x7f6e8881cf20}, null, 8) = 0
rt_sigprocmask( SIG_UNBLOCK, [ USR1 ], null, 8 ) = 0
nanosleep({tv_sec = 60, tv_nsec = 0}, 0x7ffe79859470) = 0
nanosleep({tv_sec = 60, tv_nsec = 0}, {tv_sec = 37, tv_nsec = 840636107}) = ? ERESTART_RESTARTBLOCK( Interrupted by signal)
--- SIGUSR1 {si_signo = SIGUSR1, si_code = SI_USER, si_pid = 69544, si_uid = 1000} ---
rt_sigreturn({mask = []}) = -1 EINTR( Interrupted system call)
rt_sigprocmask( SIG_BLOCK, ~[ RTMIN RT_1], [], 8) = 0
rt_sigprocmask( SIG_SETMASK, [], null, 8 ) = 0
nanosleep({tv_sec = 60, tv_nsec = 0}, 0x7ffe79859470) = 0
I believe the problem is that PHP signal handling doesn't work as one might intend when pcntl_fork() is called inside a registered signal-handling function. Since the second child process is created inside sigchldHandler, it won't process subsequent signals.
Edit: Unfortunately I don't have any references for this. I've been bashing my head against the wall myself with a similar problem as OP (hence the new account!) and I can't find any definitive answers or explanations for this behavior, just the evidence from manual stub tests. I'd love to know as well (:
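If that hypothesis is right, the fix is structural: have the SIGCHLD handler only reap and count, and perform the respawn fork from the main loop. A minimal sketch of my own (an adaptation of the question's code, requiring the pcntl extension):

```php
<?php
// Sketch: never call pcntl_fork() inside a signal handler. The handler just
// records reaped children; the main loop does the actual respawn.
$respawns = 0; // children reaped and awaiting replacement
$started  = 0;

pcntl_async_signals(true);
pcntl_signal(SIGCHLD, function () use (&$respawns) {
    while (($pid = pcntl_wait($status, WNOHANG)) > 0) {
        $respawns++; // record only -- do NOT fork here
    }
});

function startChild(): void
{
    $pid = pcntl_fork();
    if ($pid === 0) { // child: live briefly, then exit
        sleep(1);
        exit(0);
    }
}

startChild();
$started++;

// Main loop: the replacement fork happens here, outside handler context,
// so the new child's signal handling starts from a clean slate.
while ($started < 2) {
    if ($respawns > 0) {
        $respawns--;
        startChild();
        $started++;
    }
    usleep(100000);
}
echo "respawned from main loop\n";
```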

PHP SOAP SSL problems

I'm trying to connect to a secure SOAP server using NuSOAP. (I gave the built-in SOAP library a chance, but that was behaving strangely, so I switched to NuSOAP.)
Here's my code:
require('application/libraries/nusoap/nusoap.php');

$soap = new nusoap_client('https://ws.firstdataglobalgateway.com/fdggwsapi/services/order.wsdl', 'wsdl');
$soap->setCredentials(
    'WS'.STORE_NUMBER.'._.1',
    PASSWORD,
    'certificate',
    array(
        'sslcertfile' => 'first_data/cert.pem',
        'sslkeyfile'  => 'first_data/key.pem',
        'passphrase'  => KEY_PASSPHRASE
    )
);

if($err = $soap->getError()) {
    die('Error: '.$err);
}

$result = $soap->call('fdggwsapi:FDGGWSApiOrderRequest', array('v1:Transaction' => '1'));

if($soap->fault) {
    echo 'Fault! <pre>';
    var_dump($result);
    echo '</pre>';
} else {
    if($err = $soap->getError()) {
        die('Error: '.$err);
    } else {
        echo '<pre>';
        var_dump($result);
        die('</pre>');
    }
}
I get the following error:
Error: wsdl error: Getting https://ws.firstdataglobalgateway.com/fdggwsapi/services/order.wsdl - HTTP ERROR: cURL ERROR: 56: SSL read: error:1409441B:SSL routines:SSL3_READ_BYTES:tlsv1 alert decrypt error, errno 0
url: https://ws.firstdataglobalgateway.com:443/fdggwsapi/services/order.wsdl
content_type:
http_code: 0
header_size: 0
request_size: 163
filetime: -1
ssl_verify_result: 0
redirect_count: 0
total_time: 0.531131
namelookup_time: 0.00121
connect_time: 0.070608
pretransfer_time: 0.305044
size_upload: 0
size_download: 0
speed_download: 0
speed_upload: 0
download_content_length: -1
upload_content_length: 0
starttransfer_time: 0
redirect_time: 0
What could be the possible problems? How could I debug this? I'm rather out of my league here.
Based on the error:
SSL read: error:1409441B:SSL routines:SSL3_READ_BYTES:tlsv1 alert
decrypt error, errno 0
It looks to me like the PHP library is having trouble reading your cert.pem and key.pem files. These files can come in different formats. Apache requires that these be in PKCS12 format and I would guess PHP is the same. You can use a tool called "Keystore Explorer 4.0.1" to verify and convert if necessary.
You can verify the validity of the format of the keys also, using openssl and this command:
C:\Temp> openssl pkcs12 -info -in ksb_cert.p12
With these settings, my client finally works:
$client = new nusoap_client($wsdlurl,'wdsl');
$client->setUseCURL(true);
$client->useHTTPPersistentConnection();
$client->setCurlOption(CURLOPT_SSL_VERIFYHOST, 0);
$client->setCurlOption(CURLOPT_SSL_VERIFYPEER, 0);
$client->setCurlOption(CURLOPT_RETURNTRANSFER, 1);
$client->setCurlOption(CURLOPT_SSLVERSION,3);
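To rule out an unreadable key/cert pair before blaming the TLS layer, the checks can also be run from PHP directly. This sketch is my own: it generates a throwaway self-signed pair so it is self-contained, whereas in practice you would load the real cert.pem and key.pem, and it uses only the standard openssl_* functions:

```php
<?php
// Generate a throwaway key + self-signed cert in memory, then run the same
// readability checks you would run against the real cert.pem / key.pem.
$key = openssl_pkey_new(['private_key_bits' => 2048]);
$csr = openssl_csr_new(['commonName' => 'example.test'], $key);
$crt = openssl_csr_sign($csr, null, $key, 1); // self-signed, valid 1 day

openssl_x509_export($crt, $certPem);
openssl_pkey_export($key, $keyPem, 'secret'); // passphrase-protected key

// The actual checks: cert parses, key decrypts, and the two match.
$priv = openssl_pkey_get_private($keyPem, 'secret');
$ok = openssl_x509_read($certPem) !== false
   && $priv !== false
   && openssl_x509_check_private_key($certPem, $priv);

var_dump($ok);
```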
