I'm trying to test Amazon S3 in PHP on my localhost (Ubuntu) but keep getting the same error:
S3::listBuckets(): [35] error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
Here is the function that displays the bucket list:
public function buckets() {
$s3 = $this->getInstance();
/* print_r($this->_s3->listBuckets()); prints nothing, only shows the error above */
return $this->_s3->listBuckets();
}
Here is the Amazon API function that is called by this function:
public static function listBuckets($detailed = false) {
$rest = new S3Request('GET', '', '');
$rest = $rest->getResponse();
if ($rest->error === false && $rest->code !== 200)
$rest->error = array('code' => $rest->code, 'message' => 'Unexpected HTTP status');
if ($rest->error !== false) {
trigger_error(sprintf("S3::listBuckets(): [%s] %s", $rest->error['code'], $rest->error['message']), E_USER_WARNING);
return false;
}
$results = array();
if (!isset($rest->body->Buckets))
return $results;
if ($detailed) {
if (isset($rest->body->Owner, $rest->body->Owner->ID, $rest->body->Owner->DisplayName))
$results['owner'] = array(
'id' => (string) $rest->body->Owner->ID, 'name' => (string) $rest->body->Owner->DisplayName
);
$results['buckets'] = array();
foreach ($rest->body->Buckets->Bucket as $b)
$results['buckets'][] = array(
'name' => (string) $b->Name, 'time' => strtotime((string) $b->CreationDate)
);
}
else
foreach ($rest->body->Buckets->Bucket as $b)
$results[] = (string) $b->Name;
return $results;
}
It seems that you have changed your PHP version; this bug has occurred several times in PHP 5.4 but works fine in previous versions. You can re-install cURL with OpenSSL support.
This is a very common error when integrating AWS S3 on localhost.
Check that cURL is enabled and OpenSSL is active.
Get the file from http://curl.haxx.se/ca/cacert.pem and save it somewhere on your hard drive, next to your libraries. Make sure it is named cacert.pem.
Configure curl.cainfo in php.ini with the full path to the file downloaded in the previous step.
Restart Apache.
It should then work perfectly.
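For example, if the file was saved as /etc/ssl/certs/cacert.pem (an illustrative path; use wherever you stored it), the php.ini line would be:
curl.cainfo = "/etc/ssl/certs/cacert.pem"
After restarting Apache, you can confirm the setting was picked up with phpinfo().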
I want to transfer an archive of around 10GB to my Amazon S3 bucket, using a PHP script (it's a backup script).
I currently use the following code:
$uploader = new \Aws\S3\MultipartUploader($s3Client, $tmpFilesBackupDirectory, [
'bucket' => 'MyBucketName',
'key' => 'backup'.date('Y-m-d').'.tar.gz',
'before_initiate' => function ($command) use ($storageClass, $tagging) {
// extra parameters for the CreateMultipartUpload call
$command['StorageClass'] = $storageClass;
$command['Tagging'] = 'expiration='.$tagging;
$command['ServerSideEncryption'] = 'AES256';
},
]);
try
{
$result = $uploader->upload();
echo "Upload complete: {$result['ObjectURL']}\n";
}
catch (\Aws\Exception\MultipartUploadException $e)
{
echo $e->getMessage() . "\n";
}
My issue is that after a few minutes (say 10 minutes), I receive an error message from the Apache server: 504 Gateway Timeout.
I understand that this error is related to the configuration of my Apache server, but I don't want to increase its timeout.
My idea is to use the PHP SDK Low-Level API to do the following steps:
Use Aws\S3\S3Client::uploadPart() method in order to manually upload 5 parts, and store the response obtained in $_SESSION (I need the ETag values to complete the upload);
Reload the page using header('Location: xxx');
Repeat the first two steps for the next 5 parts, until all parts are uploaded;
Finalise the upload using Aws\S3\S3Client::completeMultipartUpload().
I suppose that this should work, but before using this method I'd like to know if there is an easier way to achieve my goal, for example by using the high-level API...
Any suggestions?
NOTE: I'm not looking for an existing script: my main goal is to learn how to fix this issue :)
Best regards,
Lionel
Why not just use the AWS CLI to copy the file? You can script it that way and everything stays AWS native. (Amazon has a tutorial on that.) Alternatively, to copy the archive onto an EC2 instance, you can use the scp command:
scp -i Amazonkey.pem /local/path/backupfile.tar.gz ec2-user@Elastic-IP-of-ec2-2:/path/backupfile.tar.gz
From my perspective, it would be easier to do the work within AWS, which has features to move files and data. If you'd like to use a shell script, this article on automating EC2 backups has a good one, plus more detail on backup options.
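For reference, a direct copy to S3 with the AWS CLI (assuming the CLI is installed and configured with credentials; the bucket name is taken from the question) is a single command, and it handles multipart uploads of large files automatically:
aws s3 cp /local/path/backupfile.tar.gz s3://MyBucketName/backup.tar.gz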
To answer my own question (I hope it might help someone one day!), here is how I fixed my issue, step by step:
1/ When I load my page, I check whether the archive already exists. If not, I create my .tar.gz file and reload the page using header().
I noticed that this step was quite slow since there is a lot of data to archive. That's why I reload the page, to avoid any timeout during the next steps!
2/ If the backup file exists, I use AWS multipart upload to send 10 chunks of 100MB each. Every time a chunk is sent successfully, I update a session variable ($_SESSION['backup']['partNumber']) to track which chunk needs to be uploaded next.
Once my 10 chunks are sent, I reload the page again to avoid any timeout.
3/ I repeat the second step until the upload of all parts is done, using my session variable to know which part of the upload needs to be sent next.
4/ Finally, I complete the multipart upload and I delete the archive stored locally.
You can of course send more than 10 x 100MB before reloading your page. I chose this value to be sure I won't reach a timeout even if the upload is slow. But I guess I could easily send around 5GB each time without issue.
Note: You cannot redirect your script to itself too many times. There is a limit (I think it's around 20 redirects for Chrome and Firefox before you get an error, and more for IE). In my case (the archive is around 10GB), transferring 1GB per reload is fine (the page will be reloaded around 10 times). But if the archive size increases, I'll have to send more chunks each time.
Here is my full script. It could surely be improved, but it's working quite well for now and it may help someone having a similar issue!
public function backup()
{
ini_set('max_execution_time', '1800');
ini_set('memory_limit', '1024M');
require ROOT.'/Public/scripts/aws/aws-autoloader.php';
$s3Client = new \Aws\S3\S3Client([
'version' => 'latest',
'region' => 'eu-west-1',
'credentials' => [
'key' => '',
'secret' => '',
],
]);
$tmpDBBackupDirectory = ROOT.'Var/Backups/backup'.date('Y-m-d').'.sql.gz';
if(!file_exists($tmpDBBackupDirectory))
{
$this->cleanInterruptedMultipartUploads($s3Client);
$this->createSQLBackupFile();
$this->uploadSQLBackup($s3Client, $tmpDBBackupDirectory);
}
$tmpFilesBackupDirectory = ROOT.'Var/Backups/backup'.date('Y-m-d').'.tar.gz';
if(!isset($_SESSION['backup']['archiveReady']))
{
$this->createFTPBackupFile();
header('Location: '.CURRENT_URL);
exit; // stop here, the upload continues on the next request
}
$this->uploadFTPBackup($s3Client, $tmpFilesBackupDirectory);
unlink($tmpDBBackupDirectory);
unlink($tmpFilesBackupDirectory);
}
public function createSQLBackupFile()
{
// Backup DB
$tmpDBBackupDirectory = ROOT.'Var/Backups/backup'.date('Y-m-d').'.sql.gz';
if(!file_exists($tmpDBBackupDirectory))
{
$return_var = NULL;
$output = NULL;
$dbLogin = '';
$dbPassword = '';
$dbName = '';
$command = 'mysqldump -u '.$dbLogin.' -p'.$dbPassword.' '.$dbName.' --single-transaction --quick | gzip > '.$tmpDBBackupDirectory;
exec($command, $output, $return_var);
}
return $tmpDBBackupDirectory;
}
public function createFTPBackupFile()
{
// Compacting all files
$tmpFilesBackupDirectory = ROOT.'Var/Backups/backup'.date('Y-m-d').'.tar.gz';
$command = 'tar -czf '.$tmpFilesBackupDirectory.' '.ROOT; // -z so the archive is actually gzip-compressed, matching the .tar.gz name
exec($command);
$_SESSION['backup']['archiveReady'] = true;
return $tmpFilesBackupDirectory;
}
public function uploadSQLBackup($s3Client, $tmpDBBackupDirectory)
{
$result = $s3Client->putObject([
'Bucket' => '',
'Key' => 'backup'.date('Y-m-d').'.sql.gz',
'SourceFile' => $tmpDBBackupDirectory,
'StorageClass' => '',
'Tagging' => '',
'ServerSideEncryption' => 'AES256',
]);
}
public function uploadFTPBackup($s3Client, $tmpFilesBackupDirectory)
{
$storageClass = 'STANDARD_IA';
$bucket = '';
$key = 'backup'.date('Y-m-d').'.tar.gz';
$chunkSize = 100 * 1024 * 1024; // 100MB
$reloadFrequency = 10;
if(!isset($_SESSION['backup']['uploadId']))
{
$response = $s3Client->createMultipartUpload([
'Bucket' => $bucket,
'Key' => $key,
'StorageClass' => $storageClass,
'Tagging' => '',
'ServerSideEncryption' => 'AES256',
]);
$_SESSION['backup']['uploadId'] = $response['UploadId'];
$_SESSION['backup']['partNumber'] = 1;
}
$file = fopen($tmpFilesBackupDirectory, 'r');
$parts = array();
//Reading parts already uploaded
for($i = 1; $i < $_SESSION['backup']['partNumber']; $i++)
{
if(!feof($file))
{
fread($file, $chunkSize);
}
}
// Uploading next parts
while(!feof($file))
{
// Read the next chunk once, so a failed attempt retries the same data
$body = fread($file, $chunkSize);
unset($result);
do
{
try
{
$result = $s3Client->uploadPart(array(
'Bucket' => $bucket,
'Key' => $key,
'UploadId' => $_SESSION['backup']['uploadId'],
'PartNumber' => $_SESSION['backup']['partNumber'],
'Body' => $body,
));
}
catch (\Aws\Exception\AwsException $e)
{
// part upload failed, retry with the same chunk
}
}
while (!isset($result));
$_SESSION['backup']['parts'][] = array(
'PartNumber' => $_SESSION['backup']['partNumber'],
'ETag' => $result['ETag'],
);
$_SESSION['backup']['partNumber']++;
if($_SESSION['backup']['partNumber'] % $reloadFrequency == 1)
{
header('Location: '.CURRENT_URL);
die;
}
}
fclose($file);
$result = $s3Client->completeMultipartUpload(array(
'Bucket' => $bucket,
'Key' => $key,
'UploadId' => $_SESSION['backup']['uploadId'],
'MultipartUpload' => Array(
'Parts' => $_SESSION['backup']['parts'],
),
));
$url = $result['Location'];
}
public function cleanInterruptedMultipartUploads($s3Client)
{
$tResults = $s3Client->listMultipartUploads(array('Bucket' => ''));
$tResults = $tResults->toArray();
if(isset($tResults['Uploads']))
{
foreach($tResults['Uploads'] AS $result)
{
$s3Client->abortMultipartUpload(array(
'Bucket' => '',
'Key' => $result['Key'],
'UploadId' => $result['UploadId']));
}
}
if(isset($_SESSION['backup']))
{
unset($_SESSION['backup']);
}
}
If someone has questions, don't hesitate to contact me :)
I have run into a problem with the Context.io API. I keep getting the following error message:
Warning: Invalid argument supplied for foreach() in /usr/share/nginx/html/custom-assets/includes/ContextIO/demo.php on line 11
Here is my php code:
// Require the Context.io PHP Library
require('class.contextio.php');
// See https://console.context.io/#settings to get your consumer key and consumer secret.
$contextIO = new ContextIO('consumer key','consumer secret');
$accountId = 'xxxxxxxxxx'; // Account ID of email account
$args = array('folder' => 'Inbox', 'include_flags' => 1, 'include_thread_size' => 1, 'include_body' => 1, 'limit' => 50);
echo "Getting last 50 messages...<br><br>";
$r = $contextIO->listMessages($accountId, $args);
if ($r !== false) {
foreach ($r->getData() as $message) {
echo "Subject: ".$message['subject']."<br>";
}
} else {
var_dump($r);
}
I have no clue why this is not working. Does anybody have an idea why?
All of the functions in the ContextIO library return a ContextIOResponse object, or false if the API returns an http error code. If you add a check for if($r !== false) before you call getData() it should catch any errors.
I have some old code that inserts a CSV file into a Google Drive account, where it is opened by Google Spreadsheets by default.
A couple of days ago I started to receive this error message:
There is some problems with the Google connection. Please, try again.
(Error calling POST
https://www.googleapis.com/upload/drive/v2/files?convert=false&uploadType=multipart&key=XXXXXXXXXXXXXXX:
(400) Invalid mime type provided)
Here is some of the code:
public function createDriveFile($tmpFilePath,$title,$fileMimetype = '',$description = '',$googleDocmimeType = ''){
$file = new Google_DriveFile();
$file->setTitle($title);
$file->setDescription($description);
$data = file_get_contents($tmpFilePath);
$optParams = array('data' => $data);
if($fileMimetype != '' ) {
if($fileMimetype === 'text/csv' ) {
$optParams['convert'] = false;
$optParams['mimeType'] = 'application/vnd.google-apps.spreadsheet';
} else {
$optParams['convert'] = true;
}
$file->setMimeType( $fileMimetype );
}
if ($googleDocmimeType != ''){
$optParams['mimeType'] = $googleDocmimeType;
}
$createdFile = $this->_driveService->files->insert($file, $optParams);
return $createdFile;
}
I'm pretty sure that it is sending 'application/vnd.google-apps.spreadsheet' as mimetype.
Here is a call to that function:
$createFileInfo = $googleOauth->createDriveFile('/tmp/file.csv', 'file.csv','text/csv');
I made some tests changing the value of the 'convert' parameter to true, and it didn't work.
You should be sending it with the mimeType of text/csv. Google Sheets will still be able to open it, so there is no need to misrepresent it as a true Google Spreadsheet.
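For illustration, here is a sketch of how the text/csv branch of createDriveFile() above could be adjusted (assuming the old Drive v2 PHP client shown in the question; 'convert' => true asks Drive to convert the upload into a Spreadsheet, while mimeType now describes the actual bytes being sent):
if ($fileMimetype === 'text/csv') {
// describe the uploaded bytes as CSV and let Drive do the conversion
$optParams['convert'] = true;
$optParams['mimeType'] = 'text/csv';
} else {
$optParams['convert'] = true;
}
$file->setMimeType($fileMimetype);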
When I take a look at the PayPal documentation, it says "Note that the PayPal SDK for PHP does not require SSL encryption".
https://developer.paypal.com/docs/classic/api/apiCredentials/#encrypting-your-certificate
Does this statement mean that I don't have to create a p12 certificate when working with PHP, and can use the public_key.pem and paypal_public_key.pem instead?
If yes:
Is it secure enough to create the encrypted form input elements without a p12 certificate?
If no:
What do they mean? :-)
Before this question came up, I tested this little program:
http://www.softarea51.com/blog/how-to-integrate-your-custom-shopping-cart-with-paypal-website-payments-standard-using-php/
There is a config file paypal-wps-config.inc.php where I can define the paths to my certificates.
// tried to use 'paypal_cert.p12' here as well
$config['private_key_path'] = '/home/folder/.cert/pp/prvkey.pem';
// must match the one you set when you created the private key
$config['private_key_password'] = ''; //'my_password';
When I try to use the p12 certificate, openssl_error_string() returns "Could not sign data: error:0906D06C:PEM routines:PEM_read_bio:no start line" from openssl_pkcs7_sign.
When I instead use the prvkey.pem without a password, everything works fine.
Here is the function that signs and encrypts the data.
function signAndEncrypt($dataStr_, $ewpCertPath_, $ewpPrivateKeyPath_, $ewpPrivateKeyPwd_, $paypalCertPath_)
{
$dataStrFile = realpath(tempnam('/tmp', 'pp_'));
$fd = fopen($dataStrFile, 'w');
if(!$fd) {
$error = "Could not open temporary file $dataStrFile.";
return array("status" => false, "error_msg" => $error, "error_no" => 0);
}
fwrite($fd, $dataStr_);
fclose($fd);
$signedDataFile = realpath(tempnam('/tmp', 'pp_'));
// this is where the error comes from
if(!@openssl_pkcs7_sign( $dataStrFile,
$signedDataFile,
"file://$ewpCertPath_",
array("file://$ewpPrivateKeyPath_", $ewpPrivateKeyPwd_),
array(),
PKCS7_BINARY)) {
unlink($dataStrFile);
unlink($signedDataFile);
$error = "Could not sign data: ".openssl_error_string();
return array("status" => false, "error_msg" => $error, "error_no" => 0);
}
unlink($dataStrFile);
$signedData = file_get_contents($signedDataFile);
$signedDataArray = explode("\n\n", $signedData);
$signedData = $signedDataArray[1];
$signedData = base64_decode($signedData);
unlink($signedDataFile);
$decodedSignedDataFile = realpath(tempnam('/tmp', 'pp_'));
$fd = fopen($decodedSignedDataFile, 'w');
if(!$fd) {
$error = "Could not open temporary file $decodedSignedDataFile.";
return array("status" => false, "error_msg" => $error, "error_no" => 0);
}
fwrite($fd, $signedData);
fclose($fd);
$encryptedDataFile = realpath(tempnam('/tmp', 'pp_'));
if(!@openssl_pkcs7_encrypt( $decodedSignedDataFile,
$encryptedDataFile,
file_get_contents($paypalCertPath_),
array(),
PKCS7_BINARY)) {
unlink($decodedSignedDataFile);
unlink($encryptedDataFile);
$error = "Could not encrypt data: ".openssl_error_string();
return array("status" => false, "error_msg" => $error, "error_no" => 0);
}
unlink($decodedSignedDataFile);
$encryptedData = file_get_contents($encryptedDataFile);
if(!$encryptedData) {
$error = "Encryption and signature of data failed.";
return array("status" => false, "error_msg" => $error, "error_no" => 0);
}
unlink($encryptedDataFile);
$encryptedDataArray = explode("\n\n", $encryptedData);
$encryptedData = trim(str_replace("\n", '', $encryptedDataArray[1]));
return array("status" => true, "encryptedData" => $encryptedData);
} // signAndEncrypt
} // PPCrypto
The main questions:
Is it possible to use a p12 cert with PHP, or is it secure enough to work without it?
Why do I get an error when using openssl_pkcs7_sign?
Please help.
Greetings
ninchen
You should not confuse 'using SSL' with 'using SSL with a predefined client certificate'. The document you link to describes the latter. Simply calling an https URL will enable SSL and deliver browser-equivalent security. This is done by the SDK automatically.
Predefined client certificates guard against a sophisticated attacker performing a man-in-the-middle attack. Both methods will stop an unsophisticated attacker from reading your network traffic directly.
Client certificates also serve to authenticate you to PayPal, as an alternative to user/password/signature.
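As for the openssl_pkcs7_sign error: "no start line" usually means the file is not PEM-encoded. A .p12 file is a binary PKCS#12 container, so it cannot be passed to openssl_pkcs7_sign directly; it has to be converted to PEM first. A minimal sketch (the paths and password are placeholders):
// extract the PEM certificate and private key from a PKCS#12 bundle
$p12 = file_get_contents('/home/folder/.cert/pp/paypal_cert.p12');
$certs = array();
if (openssl_pkcs12_read($p12, $certs, 'p12_password')) {
file_put_contents('/home/folder/.cert/pp/my-pubcert.pem', $certs['cert']);
file_put_contents('/home/folder/.cert/pp/prvkey.pem', $certs['pkey']);
} else {
echo 'Could not read the PKCS#12 file: '.openssl_error_string();
}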
I am trying to download a rapidshare file using its "download" subroutine as a free user. The following is the code that I use to get a response from the subroutine.
function rs_download($params)
{
$url = "http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=download&fileid=".$params['fileid']."&filename=".$params['filename'];
$reply = #file_get_contents($url);
if(!$reply)
{
return false;
}
$result_arr = array();
$result_keys = array(0=> 'hostname', 1=>'dlauth', 2=>'countdown_time', 3=>'md5hex');
if( preg_match("/DL:(.*)/", $reply, $reply_matches) )
{
$reply_altered = $reply_matches[1];
}
else
{
return false;
}
foreach( explode(',', $reply_altered) as $index => $value )
{
$result_arr[ $result_keys[$index] ] = $value;
}
return $result_arr;
}
For instance, trying to download this:
http://rapidshare.com/files/440817141/AutoRun__live-down.com_Champ.rar
I pass the fileid (440817141) and filename (AutoRun__live-down.com_Champ.rar) to rs_download(...) and I get a response just as RapidShare's API doc says.
The RapidShare API doc (see "sub=download") says to call the server hostname with the download authentication string, but I couldn't figure out what form the URL should take.
Any suggestions? I tried
$download_url = "http://$the-hostname/$the-dlauth-string/files/$fileid/$filename"
and a couple of other variations of the above; nothing worked.
I use curl to download the file, like the following:
$cr = curl_init();
$fp = fopen ("d:/downloaded_files/file1.rar", "w");
// set curl options
$curl_options = array(
CURLOPT_URL => $download_url
,CURLOPT_FILE => $fp
,CURLOPT_HEADER => false
,CURLOPT_CONNECTTIMEOUT => 0
,CURLOPT_FOLLOWLOCATION => true
);
curl_setopt_array($cr, $curl_options);
curl_exec($cr);
curl_close($cr);
fclose($fp);
The above curl code doesn't seem to work; nothing gets downloaded. Probably the download URL is incorrect.
I also tried this format for the download URL:
"http://rs$serverid$shorthost.rapidshare.com/files/$fileid/$filename"
With this, curl writes a file entry but that is all it does (it writes a 0/1 KB file).
Here is the code that I use to get the serverid and shorthost, among a few other values, from RapidShare.
function rs_checkfile($params)
{
$url = "http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=checkfiles_v1&files=".$params['fileids']."&filenames=".$params['filenames'];
// the response from rapidshare would be a string something like:
// 440817141,AutoRun__live-down.com_Champ.rar,47768,20,1,l3,0
$reply = #file_get_contents($url);
if(!$reply)
{
return false;
}
$result_arr = array();
$result_keys = array(0=> 'file_id', 1=>'file_name', 2=>'file_size', 3=>'server_id', 4=>'file_status', 5=>'short_host'
, 6=>'md5');
foreach( explode(',', $reply) as $index => $value )
{
$result_arr[ $result_keys[$index] ] = $value;
}
return $result_arr;
}
rs_checkfile(...) takes comma-separated fileids and filenames (no commas if calling for a single file).
Thanks in advance for any suggestions.
You start by requesting ?sub=download&fileid=X&filename=Y, and it returns $hostname,$dlauth,$countdown,$md5hex. Since you're a free user, you have to wait for $countdown seconds, and then call ?sub=download&fileid=X&filename=Y&dlauth=Z on that hostname to perform the download.
There's a working implementation in Python here that would probably answer any of your other questions.
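A rough PHP sketch of that second request, built from the fields returned by rs_download() above (the exact URL layout is an assumption based on this answer; adjust it to whatever the API doc specifies):
$r = rs_download(array('fileid' => '440817141', 'filename' => 'AutoRun__live-down.com_Champ.rar'));
if ($r !== false) {
sleep((int) $r['countdown_time']); // free accounts must wait before the real download starts
$download_url = 'http://'.$r['hostname'].'/cgi-bin/rsapi.cgi'
.'?sub=download&fileid=440817141'
.'&filename=AutoRun__live-down.com_Champ.rar'
.'&dlauth='.$r['dlauth'];
// then pass $download_url to the curl download code shown earlier
}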