The problem: after I upload an object to my publicly accessible Google Cloud Storage bucket, I want to use the resulting URL immediately in another service. However, I don't see a way to get the mediaUrl. All of the properties on the object returned by the following method that would give me that are private:
$bucket->upload(
    fopen($_FILES['file']['tmp_name'], 'r'),
    array('name' => $name)
);
I've already tried var_dump-ing the return value of the above method to see if any public properties would give me the created URL, but it doesn't even have any public properties.
Here's the code I'm using to upload the data:
$storage = new StorageClient([
    'keyFilePath' => 'keyfile_json.json'
]);

$bucket = $storage->bucket('bucket');
$name = 'some/name/path/'.$_POST['name'];

$bucket->upload(
    fopen($_FILES['file']['tmp_name'], 'r'),
    array('name' => $name)
);
The file uploads fine; I just can't get the URL of the actual resource to use in an API call to a different service.
How can I get the URL of the resource after it is uploaded?
You have two ways to achieve this:
Creating the URL for public objects yourself using the following syntax: https://storage.googleapis.com/[BucketName]/[ObjectName]
Where:
[BucketName] = your bucket
[ObjectName] = the name of your uploaded object
If you are using the App Engine Standard Environment, there is a method in the App Engine PHP API: getPublicUrl(string $gs_filename, boolean $use_https) : string
Where:
$gs_filename, string, The Google Cloud Storage filename, in the format: gs://bucket_name/object_name.
$use_https, boolean, If true, return an HTTPS URL. Note that the development server ignores this argument and returns only HTTP URLs.
Here is the API documentation.
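As a hedged sketch of the second option, assuming the App Engine PHP SDK is available, that method lives on CloudStorageTools:
use google\appengine\api\cloud_storage\CloudStorageTools;

// gs://bucket_name/object_name follows the placeholder format above;
// the second argument requests an HTTPS URL.
$publicUrl = CloudStorageTools::getPublicUrl('gs://bucket_name/object_name', true);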
You need to build the public link URL yourself for public objects.
The format is simple: https://storage.cloud.google.com/BucketName/ObjectName.
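As a minimal sketch combining this with the question's own upload code (assuming the object is publicly readable; object names containing special characters may need URL-encoding):
$name = 'some/name/path/'.$_POST['name'];
$bucket->upload(
    fopen($_FILES['file']['tmp_name'], 'r'),
    array('name' => $name)
);

// Either host pattern from the answers above works for public objects.
$url = 'https://storage.googleapis.com/bucket/' . $name;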
Related
How to set appProperties in the Google Drive API in PHP?
$file = new Google_Service_Drive_DriveFile();
$file->setName($f->getFilename());
$file->setMimeType(mime_content_type($f->getPathname()));
$file->setParents(array($dest));
$object = new stdClass();
$object->projecto = 'xpto';
$file->setAppProperties($object);
$data = file_get_contents($f->getPathname());
$createdFile = $driveService->files->create($file, array(
    'uploadType' => 'multipart',
    'data' => $data
));
I tried this:
$file->setAppProperties(array(array('projecto' => 'xpto')));
or this:
$file->setAppProperties(array('projecto' => 'xpto'));
It creates the file, but doesn't set the properties.
I'm not that familiar with PHP syntax, but make sure that what you pass to setAppProperties is a JSON object. The documentation indicates that it's a set of key/value pairs that the requesting application can set. It's even formatted as such in the API Explorer.
You can use Drive API to add your own properties to a Drive file. These properties are stored as key/value pairs on the Drive file.
Do note that by default, appProperties isn't part of the response object once create has been called. You'll have to specify it in the fields parameter for it to be part of the response.
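Putting both points together, a hedged sketch of the create call, reusing $file, $driveService and $data from the question (assuming, per the note above, that the array form does set the properties and they simply aren't returned unless requested):
// Pass a plain key/value map rather than a stdClass.
$file->setAppProperties(array('projecto' => 'xpto'));

$createdFile = $driveService->files->create($file, array(
    'uploadType' => 'multipart',
    'data' => $data,
    // Ask for appProperties explicitly so it shows up in the response.
    'fields' => 'id, appProperties',
));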
I'm trying to create a folder within an Amazon S3 bucket, but I'm finding it difficult to locate documentation that adequately explains what is needed. I have the following code/pseudocode for creating a folder. Can anyone explain or provide a sample of the arguments I need to place within the code?
use vendor\aws\S3\S3Client;

$bucket_url = 'https://***.amazonaws.com/***/';
$folder_name = $username . '/';

$s3Client = new vendor\aws\S3\S3Client([
    'version' => AWS_VERSION,
    'region' => AWS_REGION,
    'credentials' => [
        'key' => AWS_KEY,
        'secret' => AWS_SECRET,
    ],
]);

$s3Client->putObject(array(
    'Bucket' => AWS_BUCKET,           // Name of the bucket
    'Key' => AWS_PATH . $folder_name, // "Folder" name
    'Body' => "",
));
S3 doesn't have folders beyond the bucket, but objects (files) can have forward slashes in their names, and there are methods to retrieve objects by prefix, which lets you emulate a directory listing. This means, though, that you can't create an empty folder.
A workaround is to put an empty .txt file at the desired path and delete it afterwards; the folder structure will stay.
/* Upload an empty test.txt file to subfolder /folder/ in S3 bucketname */
$s3->putObjectFile('test.txt', 'bucketname', '/folder/test.txt', S3::ACL_PUBLIC_READ);

/* Delete the empty test.txt file from subfolder /folder/ in S3 bucketname */
$s3->deleteObject('bucketname', '/folder/test.txt');
Amazon S3 does not have a concept of folders. For S3, all objects are simply a key name with data.
Folders are a human concept which use the '/' character to separate the folders. But S3 does not care.
When you use many third-party tools (and even the AWS Management Console), the tools will often look at the object keys under your prefix and, when they see a '/', interpret it as a folder.
But there's no way to "create a folder".
If you simply PutObject an object with a key containing your desired full path (for example, "my/desired/folder/structure/file.txt"), Amazon S3 will put it there. It's not like many filesystems where the folder must exist before a file can be created.
The closest thing to "creating a folder" you could do is to create a 0-byte object with a '/' at the end of its key, for example "my/desired/folder/structure/". But it will just be another object in the bucket; it won't have any effect on the creation or operation of the bucket or any other objects in it.
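A minimal sketch of both points, reusing the $s3Client from the earlier question (the key paths and $localFile are hypothetical):
// No folder has to exist first: just put the object at its full path.
$s3Client->putObject([
    'Bucket' => AWS_BUCKET,
    'Key' => 'my/desired/folder/structure/file.txt',
    'Body' => fopen($localFile, 'r'),
]);

// The closest thing to "creating a folder": a 0-byte object whose key ends in '/'.
$s3Client->putObject([
    'Bucket' => AWS_BUCKET,
    'Key' => 'my/desired/folder/structure/',
    'Body' => '',
]);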
Amazon S3 doesn't really have directories:
In Amazon S3, buckets and objects are the primary resources, where objects are stored in buckets. Amazon S3 has a flat structure with no hierarchy like you would see in a typical file system. However, for the sake of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping objects. Amazon S3 does this by using key name prefixes for objects.
Instead, it fakes it based on the name of an object's key. Just upload an object using a key like some/directory/file.txt and many tools, including the S3 interface in the AWS console, will act as if you have an object called file.txt in a directory called directory in a directory called some.
See also Amazon S3 boto - how to create a folder?
$client->putObject([
    'Bucket' => 'bucket',
    'Key' => 'folder/',
]);
This works for 'version' => '2006-03-01'.
OK, my first SO question, be nice.
I am having problems finding the answer to this question. Yes, I've tried looking: the options for the Transfer constructor don't mention any ACL settings, and my searches on Google come up either blank or with results for version 2.x.
This is my code:
$options[] = [
    'DEBUG' => true,
];

// Where the files will be transferred to
$dest = 's3://newbucket/'.$UUID;

// Create a transfer object.
$manager = new \Aws\S3\Transfer($s3, $path, $dest, $options);

// Perform the transfer synchronously.
$manager->transfer();
$promise = $manager->promise();
$promise->then(function () {
    echo 'Done!';
});
Everything uploads OK, but the files are not public-read.
Where/how do I set public-read on the files uploaded in version 3.2?
You can add a 'before' closure to the array of options you're passing to the transfer manager to handle assigning permissions. Try replacing your manager instantiation code with this:
$manager = new \Aws\S3\Transfer($s3, $path, $dest, [
    'before' => function (\Aws\CommandInterface $command) {
        if (in_array($command->getName(), ['PutObject', 'CreateMultipartUpload'])) {
            $command['ACL'] = 'public-read';
        }
    },
]);
One way you can do it is to set the permissions on the bucket in the console: under the Permissions section of the bucket you want to change, click 'Edit bucket policy'.
The other piece of info you will need is how to create the JSON policy document that you paste in. Use http://awspolicygen.s3.amazonaws.com/policygen.html; if you get an error from AWS's tool, just reformat the policy based on what you see in http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
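For reference, a hedged sketch of applying such a public-read policy from PHP rather than the console ('newbucket' comes from the question; the statement mirrors AWS's example policies):
$policy = json_encode([
    'Version' => '2012-10-17',
    'Statement' => [[
        'Sid' => 'PublicReadGetObject',
        'Effect' => 'Allow',
        'Principal' => '*',
        'Action' => 's3:GetObject',
        'Resource' => 'arn:aws:s3:::newbucket/*',
    ]],
]);

// Attach the policy to the bucket with the SDK v3 client.
$s3->putBucketPolicy([
    'Bucket' => 'newbucket',
    'Policy' => $policy,
]);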
I hope that helps others
I've been experimenting using the new Flysystem integration with Laravel 5. I am storing 'localised' paths to the DB, and getting the Storage facade to complete the path. For example I store screenshots/1.jpg and using
Storage::disk('local')->get('screenshots/1.jpg')
or
Storage::disk('s3')->get('screenshots/1.jpg')
I can retrieve the same file on different disks.
get retrieves the file contents, but I am hoping to use it in my views like this:
<img src="{{ Storage::path('screenshots/1.jpg') }}" />
but path, or anything able to retrieve the full path, is not available (as far as I can see). So how can I return the full path? Or is this by design? If so, why am I not supposed to be able to get the full path? Or am I going about this completely the wrong way?
The path to your storage disk would be:
$storagePath = Storage::disk('local')->getDriver()->getAdapter()->getPathPrefix()
I don't know of any shorter solution than that...
You could share $storagePath with your views and then just call:
$storagePath."/myImg.jpg";
This method has existed since Laravel 5.4; you can get the path with:
$path = Storage::disk('public')->path($filename);
Edit: Solution for L5.2+
There's a better and more straightforward solution.
Use Storage::url($filename) to get the full path/URL of a given file. Note that you need to set S3 as your storage filesystem in config/filesystems.php: 'default' => 's3'
Of course, you can also do Storage::disk('s3')->url($filename) in the same way.
As you can see in config/filesystems.php, there's also a parameter 'cloud' => 's3' defined, which refers to the cloud filesystem. In case you want to maintain the storage folder on the local server but retrieve/store some files in the cloud, use Storage::cloud(), which has the same filesystem methods, i.e. Storage::cloud()->url($filename).
The Laravel documentation doesn't mention this method, but if you want to know more about it you can check its source code here.
This is how I got it to work - switching between s3 and local directory paths with an environment variable, passing the path to all views.
In .env:
APP_FILESYSTEM=local or s3
S3_BUCKET=BucketID
In config/filesystems.php:
'default' => env('APP_FILESYSTEM'),
In app/Providers/AppServiceProvider:
public function boot()
{
    view()->share('dynamic_storage', $this->storagePath());
}

protected function storagePath()
{
    if (Storage::getDefaultDriver() == 's3') {
        return Storage::getDriver()
            ->getAdapter()
            ->getClient()
            ->getObjectUrl(env('S3_BUCKET'), '');
    }

    return URL::to('/');
}
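Since boot() shares $dynamic_storage with every view, any Blade template can then build the full path (the screenshots path is just an example):
<img src="{{ $dynamic_storage }}/screenshots/1.jpg" alt="screenshot" />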
If you just want to display the storage (disk) path, use this:
Storage::disk('local')->url('screenshots/1.jpg'); // storage/screenshots/1.jpg
Storage::disk('local')->url(''); // storage
Also, if you are interested, I created a package (https://github.com/fsasvari/laravel-uploadify) just for Laravel, so you can use all of these methods on Eloquent model fields:
$car = Car::first();
$car->upload_cover_image->url();
$car->upload_cover_image->name();
$car->upload_cover_image->basename();
$car->upload_cover_image->extension();
$car->upload_cover_image->filesize();
If you need the absolute URL of the file, use the code below:
$file_path = \Storage::url($filename);
$url = asset($file_path);
// Output: http://example.com/storage/filename.jpg
First get the file URL/link, then the path, as below:
$url = Storage::disk('public')->url($filename);
$path = public_path($url);
Well, weeks ago I asked a very similar question (Get CDN url from uploaded file via Storage): I wanted the CDN URL to show the image in my view (as you are requiring).
However, after reviewing the package API, I confirmed that there is no way to do this. So my solution was to avoid Flysystem. In my case I needed to work with Rackspace, so I finally decided to write my own storage package using the PHP SDK for OpenStack.
This way, you have full access to the functions you need, like getPublicUrl(), in order to get the public URL of an object in a CDN container:
/** @var DataObject $file */
$file = \OpenCloud::container('cdn')->getObject('screenshots/1.jpg');

// $url: https://d224d291-8316ade.ssl.cf1.rackcdn.com/screenshots/1.jpg
$url = (string) $file->getPublicUrl(UrlType::SSL);
In conclusion, if you need to take the storage service to another level, then Flysystem is not enough. For local purposes, you can try @nXu's solution.
This worked for me in 2020 on Laravel 7:
$image_resize = Image::make($image->getRealPath());
$image_resize->resize(800,600);
$image_resize->save(Storage::disk('episodes')->path('') . $imgname);
So you can use it like this:
echo Storage::disk('public')->path('');
Store method:
public function upload($img)
{
    $filename = Carbon::now() . '-' . $img->getClientOriginalName();

    return Storage::put($filename, File::get($img)) ? $filename : '';
}
Route:
Route::get('image/{filename}', [
    'as' => 'product.image',
    'uses' => 'ProductController@getImage',
]);
Controller:
public function getImage($filename)
{
    $file = Storage::get($filename);

    return new Response($file, 200);
}
View:
<img src="{{ route('product.image', ['filename' => $yourImageName]) }}" alt="your image"/>
Another solution I found is this:
Storage::disk('documents')->getDriver()->getConfig()->get('url')
This will return the URL with the base path of the 'documents' storage disk.
Take a look at this: How to use storage_path() to view an image in Laravel 4. The same applies to Laravel 5:
Storage is for the file system, and most of it is not accessible to the web server. The recommended solution is to store the images somewhere in the public folder (which is the document root), in public/screenshots/ for example.
Then, when you want to display them, use asset('screenshots/1.jpg').
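For example, mirroring the <img> tag from the question:
<img src="{{ asset('screenshots/1.jpg') }}" alt="screenshot" />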
In my case, I made a separate method for local files, in this file:
src/Illuminate/Filesystem/FilesystemAdapter.php
/**
 * Get the local path for the given filename.
 *
 * @param $path
 * @return string
 */
public function localPath($path)
{
    $adapter = $this->driver->getAdapter();

    if ($adapter instanceof LocalAdapter) {
        return $adapter->getPathPrefix().$path;
    } else {
        throw new RuntimeException('This driver does not support retrieving local path');
    }
}
Then I created a pull request to the framework, but it has not been merged into the core yet:
https://github.com/laravel/framework/pull/13605
Maybe someone will merge it one day.
$url = $filename->getMedia('media_name');
I am using the PHP library for the Google Cloud Storage API. How do I set the ACL (to 'public-read', for example) when inserting a storage object, in order to make the object public via its URI?
I have tried this:
$gso = new \Google_Service_Storage_StorageObject();
$gso->setName($folderAndFileName);
$gso->setAcl('public-read');
but the use of setAcl doesn't seem to have any effect.
I'm not sure if there's an easier way, but this should work:
$acl = new Google_Service_Storage_ObjectAccessControl();
$acl->setEntity('allUsers');
$acl->setRole('READER');
$acl->setBucket('<BUCKET-NAME>');
$acl->setObject('<OBJECT-NAME>');
// $storage being a valid Google_Service_Storage instance
$response = $storage->objectAccessControls->insert('<BUCKET-NAME>', '<OBJECT-NAME>', $acl);
You can see all the possible values here.
Also, this requires the https://www.googleapis.com/auth/devstorage.full_control scope when authenticating.
In order to set the access control for an individual request, you must do the following.
To make the file public, the role must be set to "OWNER" and the entity to "allUsers".
Documentation can be found here:
https://cloud.google.com/storage/docs/access-control#predefined-project-private
$acl = new Google_Service_Storage_ObjectAccessControl();
$acl->setEntity('allUsers');
$acl->setRole('OWNER');
Then you must apply the ACL to the storage object as follows:
$storObj = new Google_Service_Storage_StorageObject();
$storObj->setAcl(array($acl));
The setAcl function requires an array as its parameter, therefore you add your access control object as the only element of an anonymous array.