I have been trying for weeks to properly format a REST request to the Amazon S3 API using the available examples on the web, but have been unable to even successfully connect.
I have found the code to generate a signature, the proper method for formatting the "string to sign", and the HTTP headers. I have worked my way through the SignatureDoesNotMatch errors only to get an "Anonymous users cannot perform copy functions. Please authenticate" message.
I have a working copy of an Adobe Flex application that successfully uploads files, but with their "original" file names. The point of using the Amazon REST API is to perform a PUT (copy) of the file, just so I can rename it to something my back-end system can use.
If I could find a way to get this REST submission to work, or perhaps a way to specify a "new" filename within Flex while uploading, I could avoid this whole REST situation altogether.
If anyone has successfully performed a PUT/copy command on the Amazon API via REST, I would be very interested in how this was accomplished. Alternatively, if anyone has been able to change the destination file name using the Flex fileReference.browse() method, I would be eternally grateful for any pointers.
PHP code for this is as follows:
$aws_key = 'removed_for_security';
$aws_secret = 'removed_for_security';
$source_file = $uploaded_s3_file; // file to upload to S3 (defined in above script)
$aws_bucket = 'bucket'; // AWS bucket
$aws_object = $event_file_name; // AWS object name (file name)
if (strlen($aws_secret) != 40) die('$aws_secret should be exactly 40 bytes long'); // single quotes so the secret itself is not printed
$file_data = file_get_contents($source_file);
if ($file_data == false) die("Failed to read file " . $source_file);
// opening HTTP connection to Amazon S3
$fp = fsockopen("s3.amazonaws.com", 80, $errno, $errstr, 30);
if (!$fp) die("$errstr ($errno)\n");
// Uploading object
$file_length = strlen($file_data); // for Content-Length HTTP field
$dt = gmdate('r'); // GMT based timestamp
// preparing String to Sign (see AWS S3 Developer Guide)
// preparing string to sign
$string2sign = "PUT
{$dt}
/{$aws_bucket}/{$aws_object}";
// preparing HTTP query
// $query = "PUT /".$aws_bucket."/".$event_file_name." HTTP/1.1
$query = "PUT /" . $event_file_name . " HTTP/1.1
Host: {$aws_bucket}.s3.amazonaws.com
Date: {$dt}
x-amz-copy-source: /{$aws_bucket}/{$current_s3_filename}
x-amz-acl: public-read
Authorization: AWS {$aws_key}:" . amazon_hmac($string2sign) . "\n\n";
$query .= $file_data;
$resp = sendREST($fp, $query);
if (strpos($resp, '<Error>') !== false) { // S3 reports errors as <Error> XML
die($resp);
}
echo "FILE uploaded\n";
// done
echo "Your file's URL is: http://s3.amazonaws.com/{$aws_bucket}/{$aws_object}\n";
fclose($fp);
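One likely culprit in the script above: under the AWS signature version 2 scheme, every x-amz-* header sent with the request must also appear in the string to sign, lowercased and sorted, between the Date and the resource. The $string2sign built above omits x-amz-acl and x-amz-copy-source, so the client-side signature cannot match the one S3 derives from the actual request. A sketch of the canonical form (the helper name is mine, not from the script):

```php
<?php
// Hedged sketch of a SigV2 string-to-sign for the copy PUT above.
// Any x-amz-* headers must be canonicalized (lowercased, sorted) and
// included between the Date line and the resource path.
function s3_string_to_sign($method, $date, array $amzHeaders, $resource) {
    ksort($amzHeaders); // canonical (sorted) header order
    $canonical = '';
    foreach ($amzHeaders as $name => $value) {
        $canonical .= strtolower($name) . ':' . trim($value) . "\n";
    }
    // The two empty lines stand in for Content-MD5 and Content-Type
    return "$method\n\n\n$date\n$canonical$resource";
}

$dt = 'Sat, 26 Mar 2011 00:41:50 +0000';
$string2sign = s3_string_to_sign('PUT', $dt, array(
    'x-amz-acl'         => 'public-read',
    'x-amz-copy-source' => '/bucket/clock.jpg',
), '/bucket/1-132-1301047200-1.jpg');
echo $string2sign;
```

This string would then be fed to amazon_hmac() exactly as in the script.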
// Sending HTTP query and receiving, with trivial keep-alive support
function sendREST($fp, $q, $debug = true){
if ($debug) echo "\nQUERY<<{$q}>>\n";
fwrite($fp, $q);
$r = '';
$check_header = true;
while (!feof($fp)) {
$tr = fgets($fp, 256);
if ($debug) echo "\nRESPONSE<<{$tr}>>";
$r .= $tr;
if (($check_header) && (strpos($r, "\r\n\r\n") !== false)) {
// if content-length == 0, return query result
if (strpos($r, 'Content-Length: 0') !== false) {
return $r;
}
}
// Keep-alive responses do not return EOF;
// they end with the \r\n0\r\n\r\n string
if (substr($r, -7) == "\r\n0\r\n\r\n") {
return $r;
}
}
return $r;
}
// hmac-sha1 code START
// hmac-sha1 function: assuming key is global $aws_secret 40 bytes long
// read more at http://en.wikipedia.org/wiki/HMAC
// warning: key($aws_secret) is padded to 64 bytes with 0x0 after first function call
function amazon_hmac($stringToSign) {
// helper function binsha1 for amazon_hmac (returns binary value of sha1 hash)
if (!function_exists('binsha1')) {
if (version_compare(phpversion(), "5.0.0", ">=")) {
function binsha1($d) { return sha1($d, true); }
} else {
function binsha1($d) { return pack('H*', sha1($d)); }
}
}
global $aws_secret;
if (strlen($aws_secret) == 40) {
$aws_secret = $aws_secret . str_repeat(chr(0), 24);
}
$ipad = str_repeat(chr(0x36), 64);
$opad = str_repeat(chr(0x5c), 64);
$hmac = binsha1(($aws_secret ^ $opad) . binsha1(($aws_secret ^ $ipad) . $stringToSign));
return base64_encode($hmac);
}
// hmac-sha1 code END
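As an aside, on any PHP build with the hash extension (bundled since 5.1.2) the manual ipad/opad construction above can be replaced with the built-in hash_hmac(), which also avoids mutating the global secret; the two agree for a 40-byte key:

```php
<?php
// Same HMAC-SHA1 signature as amazon_hmac() above, via the built-in.
// hash_hmac pads the key to the 64-byte block size with zeros itself.
function amazon_hmac_builtin($stringToSign, $secret) {
    return base64_encode(hash_hmac('sha1', $stringToSign, $secret, true));
}

echo amazon_hmac_builtin("PUT\n\n\nSat, 26 Mar 2011 00:41:50 +0000\n/bucket/x",
                         str_repeat('A', 40)), "\n";
```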
When I submit a malformed or incorrect header I get the corresponding error message as expected:
Query:
PUT /bucket/1-132-1301047200-1.jpg HTTP/1.1 Host: s3.amazonaws.com x-amz-acl: public-read Connection: keep-alive Content-Length: 34102 Date: Sat, 26 Mar 2011 00:43:36 +0000 Authorization: AWS -removed for security-:GmgRObHEFuirWPwaqRgdKiQK/EQ=
HTTP/1.1 403 Forbidden
x-amz-request-id: A7CB0311812CD721
x-amz-id-2: ZUY0mH4Q20Izgt/9BNhpJl9OoOCp59DKxlH2JJ6K+sksyxI8lFtmJrJOk1imxM/A
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Sat, 26 Mar 2011 00:43:36 GMT
Connection: close
Server: AmazonS3
Response body (chunked; the XML fields were flattened in the paste):
397
Code: SignatureDoesNotMatch
Message: The request signature we calculated does not match the signature you provided. Check your key and signing method.
StringToSignBytes: 50 55 54 0a 0a 0a 53 61 74 2c 20 32 36 20 4d 61 72 20 32 30 31 31 20 30 30 3a 34 33 3a 33 36 20 2b 30 30 30 30 0a 78 2d 61 6d 7a 2d 61 63 6c 3a 70 75 62 6c 69 63 2d 72 65 61 64 0a 2f 6d 6c 68 2d 70 72 6f 64 75 63 74 69 6f 6e 2f 31 2d 31 33 32 2d 31 33 30 31 30 34 37 32 30 30 2d 31 2e 6a 70 67
RequestId: A7CB0311812CD721
HostId: ZUY0mH4Q20Izgt/9BNhpJl9OoOCp59DKxlH2JJ6K+sksyxI8lFtmJrJOk1imxM/A
SignatureProvided: GmgRObHEFuirWPwaqRgdKiQK/EQ=
StringToSign: PUT Sat, 26 Mar 2011 00:43:36 +0000 x-amz-acl:public-read /bucket/1-132-1301047200-1.jpg
AWSAccessKeyId: -removed for security-
0
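The StringToSignBytes field in that error body is the exact string the server signed, hex-encoded, so decoding it shows precisely what your canonical string should have been. A quick check on the first bytes:

```php
<?php
// Decode the start of the hex dump from the response above.
$hex = '50 55 54 0a 0a 0a 53 61 74';            // first bytes of StringToSignBytes
$decoded = pack('H*', str_replace(' ', '', $hex)); // hex -> raw bytes
echo $decoded;                                     // "PUT\n\n\nSat"
```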
but when sending properly formatted requests, it says I'm not authenticated:
Query being used:
PUT /1-132-1301047200-1.jpg HTTP/1.1 Host: bucket.s3.amazonaws.com Date: Sat, 26 Mar 2011 00:41:50 +0000 x-amz-copy-source: /bucket/clock.jpg x-amz-acl: public-read Authorization: AWS -removed for security-:BMiGhgbFnVAJyiderKjn1cT7cj4=
HTTP/1.1 403 Forbidden
x-amz-request-id: ABE45FD4DFD19927
x-amz-id-2: CnkMmoF550H1zBlrwwKfN8zoOSt7r/zud8mRuLqzzBrdGguotcvrpZ3aU4HR4RoO
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Sat, 26 Mar 2011 00:41:50 GMT
Server: AmazonS3
Code: AccessDenied
Message: Anonymous users cannot copy objects. Please authenticate
RequestId: ABE45FD4DFD19927
HostId: CnkMmoF550H1zBlrwwKfN8zoOSt7r/zud8mRuLqzzBrdGguotcvrpZ3aU4HR4RoO
0
Date: Sat, 26 Mar 2011 00:41:50 GMT
Connection: close
Server: AmazonS3
I have been trying for weeks to properly format a REST request to the Amazon AWS S3 API using the available examples on the web
Have you tried the Amazon AWS SDK for PHP? It's comprehensive, complete, and most importantly, written by Amazon. If their own code doesn't work for you, something is seriously wrong.
Here is example code using the linked SDK to upload example.txt in the current directory to a bucket named 'my_very_first_bucket'.
<?php
// Complain wildly.
ini_set('display_errors', true);
error_reporting(-1);
// Set these yourself.
define('AWS_KEY', '');
define('AWS_SECRET_KEY', '');
// We'll assume that the SDK is in our current directory
include_once 'sdk-1.3.1/sdk.class.php';
include_once 'sdk-1.3.1/services/s3.class.php';
// Set the bucket and name of the file we're sending.
// It happens that we're actually uploading the file and
// keeping the name, so we're re-using the variable
// below.
$bucket_name = 'my_very_first_bucket';
$file_to_upload = 'example.txt';
// Fire up the object
$s3 = new AmazonS3(AWS_KEY, AWS_SECRET_KEY);
// This returns a "CFResponse"
$r = $s3->create_object(
$bucket_name,
$file_to_upload,
array(
// Filename of the thing we're uploading
'fileUpload' => (__DIR__ . '/' . $file_to_upload),
// ACL'd public.
'acl' => AmazonS3::ACL_PUBLIC,
// No wai.
'contentType' => 'text/plain',
// The docs say it'll guess this, but may as well.
'length' => filesize(__DIR__ . '/' . $file_to_upload)
)
);
// Did it work?
echo "Worked: ";
var_dump($r->isOK());
// Status as in HTTP.
echo "\nStatus: ";
var_dump($r->status);
// The public URL by which we can reach this object.
echo "\nURL: ";
echo $s3->get_object_url($bucket_name, $file_to_upload);
// Tada!
echo "\n";
Appropriate API docs:
get_object_url
create_object
The CFResponse class.
You can navigate the rest of the methods in the left menu. It's pretty comprehensive, including new bucket creation, management, deletion, same for objects, etc.
You should be able to basically drop this into your code and have it work properly. PHP 5.2-safe.
Edit by Silver Tiger:
Charles -
The method you provide uses the SDK's API functions to upload a file from the local file system to a bucket of my choosing. I have that part working already via Flex, and uploads work like a charm. The problem in question is being able to submit a REST request to AWS S3 to change the file name from its current "uploaded" name to a new name better suited to my back end (database, tracking, etc., which I handle and display separately in PHP with MySQL).
AWS S3 does not truly support a "rename" function, so they provide a method to re-"PUT" a file by reading the source from your own bucket and placing a new copy with a different name in the same bucket. The difficulty I have been having is processing the REST request, hence the HMAC signing.
I do appreciate your time and understand the example you have provided, as I also had a working copy of the PHP upload that was functioning before I designed the Flex application. The reason for the Flex was to enable status updates and a dynamically updated progress bar, which is also working like a charm :).
I will continue to pursue a REST solution, as per Amazon support it will be the only way I can rename a file already existing in my bucket.
As always, if you have input or suggestions regarding the REST submission I would be grateful for any feedback.
Thanks,
Silver Tiger
Proof copy/delete works:
$r = $s3->copy_object(
array( 'bucket' => $bucket_name, 'filename' => $file_to_upload ),
array( 'bucket' => $bucket_name, 'filename' => 'foo.txt' )
);
// Did it work?
echo "Worked: ";
var_dump($r->isOK());
// Status as in HTTP.
echo "\nStatus: ";
var_dump($r->status);
// The public URL by which we can reach this object.
echo "\nURL: ";
echo $s3->get_object_url($bucket_name, 'foo.txt');
echo "\nDelete: ";
// Nuke?
$r = $s3->delete_object($bucket_name, $file_to_upload);
// Did it work?
echo "Worked: ";
var_dump($r->isOK());
// Status as in HTTP.
echo "\nStatus: ";
var_dump($r->status);
Edit by Silver Tiger:
Charles -
No REST needed, no bother... SDK 1.3.1 and your help solved the issue. The code I used to test looks a lot like yours:
// Complain wildly.
ini_set('display_errors', true);
error_reporting(-1);
// Set these yourself.
define('AWS_KEY', 'removed for security');
define('AWS_SECRET_KEY', 'removed for security');
// We'll assume that the SDK is in our current directory
include_once 'includes/sdk-1.3.1/sdk.class.php';
include_once 'includes/sdk-1.3.1/services/s3.class.php';
// Set the bucket and the source/destination file names.
$bucket = 'bucket';
$file_to_upload = 'example.txt';
$Source_file_to_copy = 'Album.jpg';
$Destination_file = 'Album2.jpg';
// Instantiate the class
$s3 = new AmazonS3();
$response = $s3->copy_object(
array( // Source
'bucket' => $bucket,
'filename' => $Source_file_to_copy
),
array( // Destination
'bucket' => $bucket,
'filename' => $Destination_file
)
);
// Success?
var_dump($response->isOK());
Now I will implement the delete after the copy, and we're golden. Thank you sir for your insight and help.
Silver Tiger
Related
I have the following code; this was previously working and now all of a sudden I am getting an error.
The error I am getting is:
Failed to connect to server Server responed with: Server did not accept to upgrade connection to websocket.HTTP/1.1 200 OK Date: Sun, 22 Aug 2021 01:07:27 GMT Content-Type: text/html Transfer-Encoding: chunked Connection: keep-alive Last-Modified: Fri, 05 Mar 2021 07:33:32 GMT X-By: #XRPLF X-Upgrade: WebSocket X-Conn: upgrade CF-Cache-Status: DYNAMIC Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct" Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=ahbUvdpOxo1wZb%2B54qo5pEWE0KGc%2BTpWu2vgw47WhbCgjbfPwdQOGLCAZlivJyijhHs4PTt4nYVIW3ak%2BwAtlz6qhz36saYBmLZ3%2FyKJc8ZB6OJA0%2FNVp14%3D"}],"group":"cf-nel","max_age":604800} NEL: {"success_fraction":0,"report_to":"cf-nel","max_age":604800} Server: cloudflare CF-RAY: 682834517edc2ce3-LHR alt-svc: h3-27=":443"; ma=86400, h3-28=":443"; ma=86400, h3-29=":443"; ma=86400, h3=":443"; ma=86400 6980
I am not too sure what is causing it; below is the code:
<?php
include('/websocket_client.php');
$server = 'xrpl.ws';
$command = json_encode(array(
'id' => 2,
'command' => "server_info"
));
if( $sp = websocket_open($server, 443,'',$errstr, 10, true) ) {
websocket_write($sp,$command);
$result = websocket_read($sp,$errstr);
}else {
echo "Failed to connect to server\n";
echo "Server responed with: $errstr\n";
}
$result_data = json_decode($result, true);
echo '<pre>';
echo $result_data;
echo '</pre>';
?>
Below is the websocket_client.php page. I am sorry for the length, but I thought it might be important to include it all.
<?php
/*----------------------------------------------------------------------------*\
Websocket client - https://github.com/paragi/PHP-websocket-client
By Paragi 2013, Simon Riget MIT license.
This is a demonstration of a websocket client.
If you find flaws in it, please let me know at simon.riget (at) gmail
Websockets use hybi10 frame encoding:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-------+-+-------------+-------------------------------+
|F|R|R|R| opcode|M| Payload len | Extended payload length |
|I|S|S|S| (4) |A| (7) | (16/64) |
|N|V|V|V| |S| | (if payload len==126/127) |
| |1|2|3| |K| | |
+-+-+-+-+-------+-+-------------+ - - - - - - - - - - - - - - - +
| Extended payload length continued, if payload len == 127 |
+ - - - - - - - - - - - - - - - +-------------------------------+
| |Masking-key, if MASK set to 1 |
+-------------------------------+-------------------------------+
| Masking-key (continued) | Payload Data |
+-------------------------------- - - - - - - - - - - - - - - - +
: Payload Data continued ... :
+ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
| Payload Data continued ... |
+---------------------------------------------------------------+
See: https://tools.ietf.org/rfc/rfc6455.txt
or: http://tools.ietf.org/html/draft-ietf-hybi-thewebsocketprotocol-10#section-4.2
\*----------------------------------------------------------------------------*/
/*============================================================================*\
Open websocket connection
resource websocket_open(string $host [, int $port [, $additional_headers [, string &$error_string [, int $timeout]]]])
host
A host URL. It can be a domain name like www.example.com or an IP address,
with port number. Local host example: 127.0.0.1:8080
port (optional)
The port to connect to. Defaults to 80, or 443 when ssl is true.
headers (optional)
additional HTTP headers to attach to the request.
For example to parse a session cookie: "Cookie: SID=" . session_id()
error_string (optional)
A referenced variable to store error messages, if any
timeout (optional)
The maximum time in seconds, a read operation will wait for an answer from
the server. Default value is 10 seconds.
ssl (optional)
persistant (optional)
path (optional)
Context (optional)
Open a websocket connection by initiating an HTTP GET with an upgrade request
to websocket.
If the server accepts, it sends a 101 response header, containing
"Sec-WebSocket-Accept"
\*============================================================================*/
function websocket_open($host='',$port=80,$headers='',&$error_string='',$timeout=10,$ssl=false, $persistant = false, $path = '/', $context = null){
// Generate a key (to convince the server that the upgrade is not random)
// The key is for the server to prove it is websocket aware. (We know it is)
$key=base64_encode(openssl_random_pseudo_bytes(16));
$header = "GET " . $path . " HTTP/1.1\r\n"
."Host: $host\r\n"
."pragma: no-cache\r\n"
."Upgrade: WebSocket\r\n"
."Connection: Upgrade\r\n"
."Sec-WebSocket-Key: $key\r\n"
."Sec-WebSocket-Version: 13\r\n";
// Add extra headers
if(!empty($headers)) foreach($headers as $h) $header.=$h."\r\n";
// Add end of header marker
$header.="\r\n";
// Connect to server
$host = $host ? $host : "127.0.0.1";
$port = $port <1 ? ( $ssl ? 443 : 80 ): $port;
$address = ($ssl ? 'ssl://' : '') . $host . ':' . $port;
$flags = STREAM_CLIENT_CONNECT | ( $persistant ? STREAM_CLIENT_PERSISTENT : 0 );
$ctx = $context ?? stream_context_create();
$sp = stream_socket_client($address, $errno, $errstr, $timeout, $flags, $ctx);
if(!$sp){
$error_string = "Unable to connect to websocket server: $errstr ($errno)";
return false;
}
// Set timeouts
stream_set_timeout($sp,$timeout);
if (!$persistant or ftell($sp) === 0) {
//Request upgrade to websocket
$rc = fwrite($sp,$header);
if(!$rc){
$error_string
= "Unable to send upgrade header to websocket server: $errstr ($errno)";
return false;
}
// Read the response header. Fails if the upgrade fails.
$response_header = fread($sp, 1024);
// Status code 101 indicates that the WebSocket handshake has completed.
if (stripos($response_header, ' 101 ') === false
|| stripos($response_header, 'Sec-WebSocket-Accept: ') === false) {
$error_string = "Server did not accept to upgrade connection to websocket."
.$response_header;
return false;
}
// The key we send is returned, concatenate with "258EAFA5-E914-47DA-95CA-
// C5AB0DC85B11" and then base64-encoded. one can verify if one feels the need...
}
return $sp;
}
/*============================================================================*\
Write to websocket
int websocket_write(resource $handle, string $data ,[boolean $final])
Write a chunk of data through the websocket, using hybi10 frame encoding
handle
the resource handle returned by websocket_open, if successful
data
Data to transport to server
final (optional)
indicate if this block is the final data block of this request. Default true
binary (optional)
indicate if this block is sent in binary or text mode. Default true/binary
\*============================================================================*/
function websocket_write($sp,$data,$final=true,$binary=true){
// Assemble header: FINal 0x80 | Mode (0x02 binary, 0x01 text)
if ($binary)
$header=chr(($final?0x80:0) | 0x02); // 0x02 binary mode
else
$header=chr(($final?0x80:0) | 0x01); // 0x01 text mode
// Mask 0x80 | payload length (0-125)
if(strlen($data)<126) $header.=chr(0x80 | strlen($data));
elseif (strlen($data)<0xFFFF) $header.=chr(0x80 | 126) . pack("n",strlen($data));
else $header.=chr(0x80 | 127) . pack("N",0) . pack("N",strlen($data));
// Add mask
$mask=pack("N",rand(1,0x7FFFFFFF));
$header.=$mask;
// Mask application data.
for($i = 0; $i < strlen($data); $i++)
$data[$i]=chr(ord($data[$i]) ^ ord($mask[$i % 4]));
return fwrite($sp,$header.$data);
}
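To make the framing concrete, here is a worked example of the bytes websocket_write() builds for a tiny text frame, with a fixed mask so the output is predictable: 0x81 is FIN | text opcode, 0x82 is the MASK bit | payload length 2, then the 4 mask bytes, then the payload XORed byte-by-byte with the mask.

```php
<?php
// Build a masked "Hi" text frame per the hybi10 layout in the header comment.
$mask = "\x01\x02\x03\x04"; // fixed mask, for illustration only
$payload = 'Hi';
$frame = chr(0x81) . chr(0x80 | strlen($payload)) . $mask;
for ($i = 0; $i < strlen($payload); $i++) {
    $frame .= $payload[$i] ^ $mask[$i % 4]; // mask the application data
}
echo bin2hex($frame), "\n"; // 818201020304496b
```

A real client must use a fresh random mask per frame, as websocket_write() does with rand().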
/*============================================================================*\
Read from websocket
string websocket_read(resource $handle [,string &error_string])
read a chunk of data from the server, using hybi10 frame encoding
handle
the resource handle returned by websocket_open, if successful
error_string (optional)
A referenced variable to store error messages, if any
Note:
- This implementation waits for the final chunk of data before returning.
- Control frames (ping/close) are handled or ignored while reading.
\*============================================================================*/
function websocket_read($sp,&$error_string=NULL){
$data="";
do{
// Read header
$header=fread($sp,2);
if(!$header){
$error_string = "Reading header from websocket failed.";
return false;
}
$opcode = ord($header[0]) & 0x0F;
$final = ord($header[0]) & 0x80;
$masked = ord($header[1]) & 0x80;
$payload_len = ord($header[1]) & 0x7F;
// Get payload length extensions
$ext_len = 0;
if($payload_len >= 0x7E){
$ext_len = 2;
if($payload_len == 0x7F) $ext_len = 8;
$header=fread($sp,$ext_len);
if(!$header){
$error_string = "Reading header extension from websocket failed.";
return false;
}
// Set extended payload length
$payload_len= 0;
for($i=0;$i<$ext_len;$i++)
$payload_len += ord($header[$i]) << ($ext_len-$i-1)*8;
}
// Get Mask key
if($masked){
$mask=fread($sp,4);
if(!$mask){
$error_string = "Reading header mask from websocket failed.";
return false;
}
}
// Get payload
$frame_data='';
while($payload_len>0){
$frame= fread($sp,$payload_len);
if(!$frame){
$error_string = "Reading from websocket failed.";
return false;
}
$payload_len -= strlen($frame);
$frame_data.=$frame;
}
// Handle ping requests (sort of): send pong and continue reading
if($opcode == 9){
// Assemble header: FINal 0x80 | Opcode 0x0A + Mask on 0x80 with zero payload
fwrite($sp,chr(0x8A) . chr(0x80) . pack("N", rand(1,0x7FFFFFFF)));
continue;
// Close
} elseif($opcode == 8){
fclose($sp);
// 0 = continuation frame, 1 = text frame, 2 = binary frame
}elseif($opcode < 3){
// Unmask data
$data_len=strlen($frame_data);
if($masked)
for ($i = 0; $i < $data_len; $i++)
$data.= $frame_data[$i] ^ $mask[$i % 4];
else
$data.= $frame_data;
}else
continue;
}while(!$final);
return $data;
}
?>
I thought initially I had been blacklisted or something for the number of requests I was making, and the message mentions Cloudflare, but I used a VPN to navigate to xrpl.ws via the host's IP and was able to access it without problems. I have not made any changes to the PHP ini file either, so I really am stuck on what is causing this. Thanks for any help, and sorry for the length of the examples. Thanks again.
I did see a previous answer which mentioned the way the key is generated, but I looked into it and I believe it's using a good generation method. So I really am at a loss.
The error message is pretty clear:
Server did not accept to upgrade connection to websocket.
You'd need Ratchet, because there is likely no WebSocket support available on this server anymore.
Or it may be sending unexpected HTTP headers.
I have a Laravel system and I am storing one of my responses in a particular file like this:
$objFile = new Filesystem();
$path = "files/FileName.php";
$string = $this->getSlides();
if ($objFile->exists($path))
{
$objFile->put($path,"",$lock = false);
$objFile->put($path,$string,$lock = false);
$objFile->getRequire($path);
}
else
return getcwd() . "\n";
Now I get the contents using the following lines:
$objFile = new Filesystem();
$path = "files/FileName.php";
if ($objFile->exists($path))
{
return Response::json([$objFile->getRequire($path)],200);
}
else
return getcwd() . "\n";
Now what's happening is that when I store the file on the server, it adds some headers like:
HTTP/1.0 200 OK
Cache-Control: no-cache
Content-Type: application/json
Date: Thu, 10 Nov 2016 05:38:08 GMT
followed by my stored file, so when I call the file on my frontend, I get the following error:
SyntaxError: Unexpected token H in JSON at position 0(…)
Of course it expects a JSON value from me, but I'm giving it something that doesn't start like one. Any idea how I can remove that at the PHP level?
You are using the wrong method for getting the file content.
$objFile->getRequire($path) will execute the code chunk return require $path;. You have to use $objFile->get($path) to get the file's content.
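A plain-PHP illustration of the difference (Laravel's Filesystem::get() behaves like file_get_contents(), while getRequire() behaves like `return require $path`):

```php
<?php
// Write a small PHP file that returns an array, then read it both ways.
$path = tempnam(sys_get_temp_dir(), 'slides');
file_put_contents($path, '<?php return array("Data" => array(1, 2, 3));');

$raw   = file_get_contents($path); // the literal file text
$value = require $path;            // the evaluated return value

echo substr($raw, 0, 5), "\n";     // <?php
echo json_encode($value), "\n";    // {"Data":[1,2,3]}
unlink($path);
```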
Sorry, I made a bit of a blunder while retrieving the result.
In the getSlides method I was using:
return Response::json([
'Data' => $final_array
],200);
Now I replaced it with:
$final_array2 = array(
'Data' => $final_array
);
return json_encode($final_array2);
Sorry: because I didn't include this code, no one could have guessed the cause.
I have a file uploaded to an AWS S3 bucket and have set it to public permission. I want to share that file on Facebook. I could just copy the public link and share it, but I also want the download count to be stored. In other words, I want to host a PHP page on my web hosting with a tab-like bar showing the file name, file size, download link, and total download count. Please help me with the code.
I tried the following code, which I got from a Google search, but with no luck:
<?php
$aws_key = '_YOUR_AWS_KEY_000000';
$aws_secret = '_your_aws_secret_00000000000000000000000';
$aws_bucket = 'anyexample-test'; // AWS bucket
$aws_object = 'test.png'; // AWS object name (file name)
if (strlen($aws_secret) != 40) die("$aws_secret should be exactly 40 bytes long");
$dt = gmdate('r'); // GMT based timestamp
// preparing string to sign
$string2sign = "GET
{$dt}
/{$aws_bucket}/{$aws_object}";
// preparing HTTP query
$query = "GET /{$aws_bucket}/{$aws_object} HTTP/1.1
Host: s3.amazonaws.com
Connection: close
Date: {$dt}
Authorization: AWS {$aws_key}:".amazon_hmac($string2sign)."\n\n";
echo "Downloading: http://s3.amazonaws.com/{$aws_bucket}/{$aws_object}\n";
list($header, $resp) = downloadREST(null, $query); // downloadREST opens its own connection
echo "\n\n";
if (strpos($header, '200 OK') === false) // checking for error
die($header."\r\n\r\n".$resp);
$aws_object_fs = str_replace('/', '_', $aws_object);
// AWS object may contain slashes. We're replacing them with underscores
$fh = fopen($aws_object_fs, 'wb');
if ($fh == false)
die("Can't open file {$aws_object_fs} for writing. Fatal error!\n");
echo "Saving data to {$aws_object_fs}...\n";
fwrite($fh, $resp);
fclose($fh);
// Sending HTTP query, without keep-alive support
function downloadREST($fp, $q)
{
// opening HTTP connection to Amazon S3
// since there is no keep-alive we open new connection for each request
$fp = fsockopen("s3.amazonaws.com", 80, $errno, $errstr, 30);
if (!$fp) die("$errstr ($errno)\n"); // connection failed, pity
fwrite($fp, $q); // sending query
$r = ''; // buffer for result
$check_header = true; // header check flag
$header_end = 0;
while (!feof($fp)) {
$r .= fgets($fp, 256); // reading response
if ($check_header) // checking for header
{
$header_end = strpos($r, "\r\n\r\n"); // this is HTTP header boundary
if ($header_end !== false)
$check_header = false; // We've found it, no more checking
}
}
fclose($fp);
$header_boundary = $header_end+4; // 4 is length of "\r\n\r\n"
return array(substr($r, 0, $header_boundary), substr($r, $header_boundary));
}
// hmac-sha1 code START
// hmac-sha1 function: assuming key is global $aws_secret 40 bytes long
// http://en.wikipedia.org/wiki/HMAC
// warning: key is padded to 64 bytes with 0x0 after first function call
// hmac-sha1 function
function amazon_hmac($stringToSign)
{
if (!function_exists('binsha1'))
{ // helper function binsha1 for amazon_hmac (returns binary value of sha1 hash)
if (version_compare(phpversion(), "5.0.0", ">=")) {
function binsha1($d) { return sha1($d, true); }
} else {
function binsha1($d) { return pack('H*', sha1($d)); }
}
}
global $aws_secret;
if (strlen($aws_secret) == 40)
$aws_secret = $aws_secret.str_repeat(chr(0), 24);
$ipad = str_repeat(chr(0x36), 64);
$opad = str_repeat(chr(0x5c), 64);
$hmac = binsha1(($aws_secret^$opad).binsha1(($aws_secret^$ipad).$stringToSign));
return base64_encode($hmac);
}
// hmac-sha1 code END
?>
I would suggest using the official AWS SDK for PHP, because it has all of the request signing and handling logic implemented for you. Here is an article by one of the SDK's developers that is relevant to what you are doing: Streaming Amazon S3 Objects From a Web Server
In fact, if you just need to see the number of downloads, you can achieve this without running your own PHP server.
This info is already available in the S3 bucket logs, if you enable them. It will also be more accurate, since with the PHP approach there is no way to track a download if the user takes the S3 link directly and shares/downloads it.
These logs are a little difficult to parse, though, but services like https://qloudstat.com and http://www.s3stat.com/ help here.
Another point: downloads will be considerably faster if you enable a CDN (CloudFront) in front of the S3 bucket.
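If you do want the PHP-proxy route the question describes, a minimal sketch of the count-then-redirect idea looks like this (file names and paths are made up for illustration): each hit bumps a flat-file counter, then sends the visitor on to the public S3 URL.

```php
<?php
// Increment a per-object flat-file counter and return the new count.
function bump_download_count($counterFile) {
    $n = is_file($counterFile) ? (int) file_get_contents($counterFile) : 0;
    file_put_contents($counterFile, (string) ($n + 1), LOCK_EX);
    return $n + 1;
}

$count = bump_download_count(sys_get_temp_dir() . '/test.png.count');
echo "Downloads so far: $count\n";
// In the real page you would then redirect to the object:
// header('Location: http://s3.amazonaws.com/anyexample-test/test.png');
```

As the answer notes, this only counts downloads that go through your page, not direct S3 links.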
I have a PHP app that uploads videos (from little ones - 1MB - to big ones - 400MB).
Everything works fine, except for some particular files.
These files always present an MD5 checksum error:
AWS Error Code: BadDigest, Status Code: 400, AWS Request ID: 89BBC1D79A4492A7, AWS Error Type: client, AWS Error Message: The Content-MD5 you specified did not match what we received.
I verified the MD5 and it really doesn't match, but I have no idea why!
If it was a corruption error, the returned S3 MD5 would vary, but it's always the same.
In my local machine (a Mac) and on the server (Ubuntu), the MD5 is:
9131ee88a194b555d0a3519f67294f31
In Amazon, it is:
8e6789baf9c5d434003a5443d30143fa
The upload is made with this excerpt of code:
try
{
$start = (float) array_sum(explode(' ',microtime()));
$save_path = "/tmp/$video[quality]/$video[video_id].mp4";
$db_path = "$video[channel]/$video[quality]/$video[video_id].mp4";
$bytes = number_format(filesize($save_path) / 1048576, 2) . ' MB';
System_Daemon::info(($i + 1) . "/$num_of_videos - started upload of $video[channel] - $video[video_id] with $bytes");
$s3 = Aws\S3\S3Client::factory(
array(
'key' => 'MY KEY',
'secret' => 'MY SECRET',
'region' => Region::US_EAST_1
)
);
$results = $s3->putObject(array(
'Bucket' => 'media.tubelivery.com',
'Key' => $db_path,
'Body' => fopen($save_path, 'r'),
'ACL' => Aws\S3\Enum\CannedAcl::PUBLIC_READ
));
//Delete the original file
unlink($save_path);
clearstatcache($save_path);
//Change the video state to 0
update_video_state_to_uploaded_to_S3($video['id']);
$end = (float) array_sum(explode(' ',microtime()));
$time = sprintf("%.4f", ($end - $start)) . " sec";
System_Daemon::info("uploaded video " . ($i + 1) . " to $db_path in $time");
}
catch (Aws\S3\Exception\S3Exception $e)
{
System_Daemon::err("ERROR uploading $video[video_id].mp4 to S3");
foreach($results as $key => $result)
{
System_Daemon::err("$key => $result");
}
$save_path = "/tmp/$video[quality]/$video[video_id].mp4";
clearstatcache($save_path);
System_Daemon::err("ERROR: $e");
}
This is the exact log:
[Mar 13 02:35:53] err: ERROR uploading 4XwKKMlGibo.mp4 to S3 [l:145]
[Mar 13 02:35:53] err: Expiration => [l:148]
[Mar 13 02:35:53] err: ServerSideEncryption => [l:148]
[Mar 13 02:35:53] err: ETag => "7f65e3f892d96b9703d411219e2b868a" [l:148]
[Mar 13 02:35:53] err: VersionId => [l:148]
[Mar 13 02:35:53] err: RequestId => 80821CC621946236 [l:148]
[Mar 13 02:35:53] err: ERROR: Aws\S3\Exception\BadDigestException:
AWS Error Code: BadDigest,
Status Code: 400,
AWS Request ID: 7C4B4834C6235D1A,
AWS Error Type: client,
AWS Error Message: The Content-MD5 you specified did not match what we received. [l:152]
Any ideas? What am I doing wrong?
I was working with a large data transfer with S3 and had the exact same error message. It turned out that the library we were using made use of the multipart upload functionality for files above a certain size. I'm unsure if the PHP SDK does the same thing, but apparently the way that multipart objects are stored within S3 causes a different hash to be generated than you would get when doing it on your local machine.
I saw that this question was very old, but wanted you to know you were not alone!
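A sketch of the widely reported rule behind this (not documented as a contract by AWS, so treat it as an observation): for multipart uploads S3 sets the ETag to the MD5 of the concatenated binary part-MD5s, suffixed with "-<part count>", so it can never equal the plain MD5 of the whole file.

```php
<?php
// Compute the multipart-style ETag for $data split into $partSize chunks.
function multipart_etag($data, $partSize) {
    $parts = str_split($data, $partSize);
    if (count($parts) === 1) {
        return md5($data); // single-part PUT: plain MD5 of the body
    }
    $binSum = '';
    foreach ($parts as $part) {
        $binSum .= md5($part, true); // binary MD5 of each part
    }
    return md5($binSum) . '-' . count($parts);
}

echo multipart_etag(str_repeat('x', 10), 4), "\n"; // three parts, ends in "-3"
```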
OK, so far I have been able to show thumbnails from a user/album using the Google feed. Everything displays OK, except when I want to show the thumbnail image larger. I can't seem to get the large image to show; not sure what to use here. Here's my code:
<?php
$user = '100483307985144997386';
$albumid = '5092093264124561713';
$picasaURL = "http://picasaweb.google.com/$user/";
$albumfeedURL = "http://picasaweb.google.com/data/feed/api/user/$user/albumid/$albumid";
$sxml_album = simplexml_load_file($albumfeedURL);
echo '<table cellpadding="3" cellspacing="3">';
echo "<tr>";
$i = 0;
foreach ($sxml_album->entry as $album_photo)
{
    //$title = $album_photo->title;
    $summary = $album_photo->summary;
    // Write thumbnail to file
    $media = $album_photo->children('http://search.yahoo.com/mrss/');
    $thumbnail = $media->group->thumbnail[1];
    $gphoto = $album_photo->children('http://schemas.google.com/photos/2007/');
    $linkName = $gphoto->group->attributes()->{'url'};
    // Direct address to thumbnail
    $thumbAddy = $thumbnail->attributes()->{'url'};
    if ($i % 4 == 0) { echo '</tr><tr>'; }
    echo '<td style="width:90px; overflow:hidden; word-wrap:break-word; font-size:12px;">';
    echo '<a class="fancybox-buttons" data-fancybox-group="button" href="' . $linkName . '"><img src="' . $thumbAddy . '" /></a>';
    echo '<p>' . $summary . '</p></td>';
    $i++;
}
echo '</tr></table>';
The feed/API entry for each photo contains three thumbnails and one large picture, which are accessible via the native HTTP REST API as follows:
"media$thumbnail":[
{
"url":"https://lh3.googleusercontent.com/-_FFMNGPU1TQ/TtukXyN4eCI/AAAAAAAACso/EzPmut2iKVQ/s72/DSC01612.JPG",
"height":72,
"width":48
},
{
"url":"https://lh3.googleusercontent.com/-_FFMNGPU1TQ/TtukXyN4eCI/AAAAAAAACso/EzPmut2iKVQ/s144/DSC01612.JPG",
"height":144,
"width":96
},
{
"url":"https://lh3.googleusercontent.com/-_FFMNGPU1TQ/TtukXyN4eCI/AAAAAAAACso/EzPmut2iKVQ/s288/DSC01612.JPG",
"height":288,
"width":192
}
],
The large one:
"media$group":{
"media$content":[
{
"url":"https://lh3.googleusercontent.com/-_FFMNGPU1TQ/TtukXyN4eCI/AAAAAAAACso/EzPmut2iKVQ/DSC01612.JPG",
"height":512,
"width":341,
"type":"image/jpeg",
"medium":"image"
}
similar reference
When coding clients against an underlying REST API, it often helps to have a good grasp of the native protocol and of what character streams (request/response) are actually on the wire; then you adapt PHP/cURL to what is there in the HTTP protocol.
The Google OAuth playground is a great tool for testing the back-and-forth dialogs involved in development against any of the GData APIs (including Picasa):
playground
Here is the playground request to get the thumbs and the large pic for a given album/photo:
GET //data/entry/api/user/rowntreerob/albumid/5682316071017984417/photoid/5682316083381958690?fields=media%3Agroup%2Fmedia%3Athumbnail%5B%40url%5D%2Cmedia%3Agroup%2Fmedia%3Acontent%5B%40url%5D&alt=json HTTP/1.1
Host: picasaweb.google.com
Authorization: OAuth ya29.AHES6ZT123y3Y5Cy3rILYg4Ah4q....
HTTP/1.1 200 OK
status: 200
gdata-version: 1.0
content-length: 756
x-xss-protection: 1; mode=block
content-location: https://picasaweb.google.com//data/entry/api/user/rowntreerob/albumid/5682316071017984417/photoid/5682316083381958690?fields=media%3Agroup%2Fmedia%3Athumbnail%5B%40url%5D%2Cmedia%3Agroup%2Fmedia%3Acontent%5B%40url%5D&alt=json
x-content-type-options: nosniff
set-cookie: _rtok=a1p2m3PiHFkc; Path=/; Secure; HttpOnly, S=photos_html=sX3EHuLxGEre_OMvR0LTPg; Domain=.google.com; Path=/; Secure; HttpOnly
expires: Wed, 16 May 2012 03:23:51 GMT
vary: Accept, X-GData-Authorization, GData-Version, Cookie
x-google-cache-control: remote-fetch
-content-encoding: gzip
server: GSE
last-modified: Fri, 06 Jan 2012 17:57:33 GMT
via: HTTP/1.1 GWA
cache-control: private, max-age=0, must-revalidate, no-transform
date: Wed, 16 May 2012 03:23:51 GMT
access-control-allow-origin: *
content-type: application/json; charset=UTF-8
x-frame-options: SAMEORIGIN
And the response to the above, run through a pretty-printer:
"version":"1.0",
"encoding":"UTF-8",
"entry":{
"xmlns":"http://www.w3.org/2005/Atom",
"xmlns$media":"http://search.yahoo.com/mrss/",
"media$group":{
"media$content":[
{
"url":"https://lh3.googleusercontent.com/-_FFMNGPU1TQ/TtukXyN4eCI/AAAAAAAACso/EzPmut2iKVQ/DSC01612.JPG",
"height":512,
"width":341,
"type":"image/jpeg",
"medium":"image"
}
],
"media$thumbnail":[
{
"url":"https://lh3.googleusercontent.com/-_FFMNGPU1TQ/TtukXyN4eCI/AAAAAAAACso/EzPmut2iKVQ/s72/DSC01612.JPG",
"height":72,
"width":48
},
{
"url":"https://lh3.googleusercontent.com/-_FFMNGPU1TQ/TtukXyN4eCI/AAAAAAAACso/EzPmut2iKVQ/s144/DSC01612.JPG",
"height":144,
"width":96
},
{
"url":"https://lh3.googleusercontent.com/-_FFMNGPU1TQ/TtukXyN4eCI/AAAAAAAACso/EzPmut2iKVQ/s288/DSC01612.JPG",
"height":288,
"width":192
You can specify the size by using the imgmax parameter (imgmax=d means the original image):
https://developers.google.com/picasa-web/docs/2.0/reference#Parameters
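A minimal sketch of adding that parameter to the album feed URL from the question (the `thumbsize` value below is illustrative; see the parameter reference linked above for the accepted sizes):

```php
<?php
// Sketch: request original-size images from the album feed by appending
// imgmax=d ('d' = original size); thumbsize controls which thumbnail
// sizes are returned. $user/$albumid are the values from the question.
$user = '100483307985144997386';
$albumid = '5092093264124561713';
$albumfeedURL = "http://picasaweb.google.com/data/feed/api/user/$user"
              . "/albumid/$albumid?" . http_build_query([
                  'imgmax'    => 'd',   // original ("download") size
                  'thumbsize' => '288', // example thumbnail size
              ]);
echo $albumfeedURL;
```

With imgmax set, the `<content src="...">` element of each entry points at the full-resolution image instead of a downscaled one.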
Have you tried a print_r( $album_photo ) to check the exact format of the object and what it contains?
I'm pretty sure that there are a bunch of other parameters you can specify in that API to get access to different sizes of pictures and thumbnails. Check the docs.
I've accessed this API using json-script format a while ago and from memory there are a lot of options you can specify.
I have scoured the entire internet trying to find an answer to this problem. Nobody actually answered the question. For future reference to anyone else reading, or my future self, to get the large image do this:
echo $album_photo->content->attributes()->{'src'};
That was WAAAAYY more complicated than it should have been, and the normal XML user probably would have already known how to do that. :/
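For anyone putting this together later, here is a self-contained sketch of that lookup. The XML below is a trimmed sample entry with a placeholder URL, not a live feed; the access pattern is the same one the line above uses on a real `$album_photo`:

```php
<?php
// Minimal demonstration of reading <content src="..."> with SimpleXML.
// The sample entry stands in for one $album_photo element of the real
// Atom feed; the URL is a placeholder.
$xml = <<<XML
<entry xmlns="http://www.w3.org/2005/Atom">
  <content type="image/jpeg" src="https://example.com/photos/DSC01612.JPG"/>
</entry>
XML;

$album_photo = simplexml_load_string($xml);
// SimpleXML resolves children in the element's default namespace, which is
// why ->content works here just as ->entry does on the full feed.
$large = (string) $album_photo->content->attributes()->{'src'};
echo $large; // the full-size image URL
```

In the question's loop, `$large` is what belongs in the fancybox `href` in place of `$linkName`.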