So I've been playing around with Heroku and I really like it. It's fast and it just works. However, I have encountered a problem with my gallery app: https://miko-gallery.herokuapp.com . Create a free account, create an album and try uploading a photo. It will not display. I have run the php artisan storage:link command, but it does not work. What am I missing here?
EDIT
I've just tried something new: I ran heroku run bash and cd'ed into the storage/app/public folder, and it does not contain the images folder that was supposed to be there.
My code for saving the photo is here (works on localhost):
public function store(Request $request)
{
    $ext = $request->file('items')->getClientOriginalExtension();
    $filename = str_random(32).'.'.$ext;
    $file = $request->file('items');
    $path = Storage::disk('local')->putFileAs('public/images/photos', $file, $filename);

    $photo = new Photo();
    $photo->album_id = $request->album_id;
    $photo->caption = $request->caption;
    $photo->extension = $request->file('items')->getClientOriginalExtension();
    $photo->path = $path.'.'.$photo->extension;
    $photo->mime = $request->file('items')->getMimeType();
    $photo->file_name = $filename;
    $photo->save();

    return response()->json($photo, 200);
}
Heroku's filesystem is dyno-local and ephemeral. Any changes you make to it will be lost the next time each dyno restarts. This happens frequently (at least once per day).
As a result, you can't store uploads on the local filesystem. Heroku's official recommendation is to use something like Amazon S3 to store uploads. Laravel supports this out of the box:
Laravel provides a powerful filesystem abstraction thanks to the wonderful Flysystem PHP package by Frank de Jonge. The Laravel Flysystem integration provides simple to use drivers for working with local filesystems, Amazon S3, and Rackspace Cloud Storage. Even better, it's amazingly simple to switch between these storage options as the API remains the same for each system.
Simply add league/flysystem-aws-s3-v3 ~1.0 to your dependencies and then configure it in config/filesystems.php.
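For reference, a minimal sketch of the s3 disk configuration and of switching the upload to it (the bucket and environment variable names are assumptions; fill in your own credentials, which on Heroku you would set as config vars):

// config/filesystems.php (sketch) -- the s3 disk reads its credentials
// from environment variables rather than from code.
's3' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
    'bucket' => env('AWS_BUCKET'),
],

// In the store() method above, write to the s3 disk instead of the
// dyno-local filesystem (same putFileAs() call, different disk):
$path = Storage::disk('s3')->putFileAs('images/photos', $file, $filename);

The stored file can then be served with Storage::disk('s3')->url($path), and uploads will survive dyno restarts.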
If you don't have SSH access, simply create a route so you can run this command by hitting a URL:
Route::get('/artisan/storage', function() {
    $command = 'storage:link';
    $result = Artisan::call($command);
    return Artisan::output();
});
First, unlink the existing symlink from public/storage (as sketched below), then hit the route above.
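A minimal sketch of that unlink step as a route (it assumes the symlink lives at the default public/storage location; the route path here is just an example):

Route::get('/artisan/storage/unlink', function () {
    $link = public_path('storage'); // default target created by php artisan storage:link
    if (is_link($link)) {
        unlink($link); // remove the stale symlink so storage:link can recreate it
        return 'Removed existing storage symlink';
    }
    return 'No storage symlink found';
});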
After reviewing the PHP Docs for the GCP Storage API and the Bookshelf tutorial (https://cloud.google.com/php/getting-started/using-cloud-storage) I'm lost on how to list the files located in a Bucket subdirectory.
I've viewed Listing files in Google Cloud Storage (nearline) - missing files; however, that code is written in Python. If it really is as simple as using an ls command, how would I run this command from PHP? I've dug through Google's repos on GitHub and I'm not sure which to use in this case.
I have both of these libraries included via Composer. Just to clarify, I'm running this remotely from a DigitalOcean Droplet, not from App Engine.
"google/appengine-php-sdk": "^1.9",
"google/cloud": "^0.39.2",
There's an "objects" method that'll do this for you.
use Google\Cloud\Storage\StorageClient;

$storage = new StorageClient();
$bucket = $storage->bucket('my-bucket');
$objects = $bucket->objects([
    'fields' => 'items/name,nextPageToken'
]);

foreach ($objects as $object) {
    echo $object->name() . PHP_EOL;
}
The documentation for the PHP storage client is over here: https://googlecloudplatform.github.io/google-cloud-php/#/docs/google-cloud/v0.39.2/storage/storageclient
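Since the question is specifically about a bucket subdirectory, note that objects() also accepts a prefix option that limits the listing to keys under a given path (a sketch; my-bucket and photos/ are placeholder names):

// GCS object names are flat; a "subdirectory" is just a shared key prefix.
$objects = $bucket->objects([
    'prefix' => 'photos/',
    'fields' => 'items/name,nextPageToken'
]);

foreach ($objects as $object) {
    echo $object->name() . PHP_EOL;
}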
I am trying to copy a folder that is already on S3 and save it under a different name on S3 in Laravel 5.4. What I have found so far is that I can copy an image, not a folder. I have tried to copy the folder like so:
$disk->copy("admin/form/$old_form","admin/form/$new_form");
But it does not work like that; it gives me an error. Do I need to loop through and copy each folder item separately? Like:
$images = $disk->allFiles('admin/form/$id');
Or is there any workaround available in Laravel or the S3 API itself?
Please help, it's driving me crazy.
Thanks in advance.
I'm in the middle of doing this same thing. Based on what I've read so far, copying a directory itself using Laravel doesn't seem to be possible. The suggestions I've seen so far involve looping through and copying each image; however, I'm not at all satisfied with the speed (since I'm doing this with lots of images several times a day).
Note that I'm only using the Filesystem directly like this so I can more easily access the methods in PhpStorm. $s3 = \Storage::disk('s3'); would accomplish the same thing as my first two lines. I'll update this answer if I find anything that works more quickly.
use Illuminate\Filesystem\FilesystemManager;

$filesystem = new FilesystemManager(app());
$s3 = $filesystem->disk('s3');

$images = $s3->allFiles('old-folder');

// If the destination already exists, copy() will throw an exception,
// so in my case I'm deleting the entire destination folder first to simplify things.
$s3->deleteDirectory('new-folder');

foreach ($images as $image) {
    $new_loc = str_replace('old-folder', 'new-folder', $image);
    $s3->copy($image, $new_loc);
}
Another option:
$files = Storage::disk('s3')->allFiles("old/location");

foreach ($files as $file) {
    $copied_file = str_replace("old/location", "new/location", $file);
    if (!Storage::disk('s3')->exists($copied_file)) {
        Storage::disk('s3')->copy($file, $copied_file);
    }
}
This can all be done from the CLI:
Install and configure s3cmd:
sudo apt install s3cmd
s3cmd --configure
Then you can:
s3cmd sync --delete-removed path/to/folder s3://bucket-name/path/to/folder/
To make the files and folder public:
s3cmd setacl s3://bucket-name/path/to/folder/ --acl-public --recursive
Further reading: https://s3tools.org/s3cmd-sync
I've found that a faster way to do this is to use the AWS command-line tools, specifically the aws s3 sync command.
Once installed on your system, you can invoke it from within your Laravel project using shell_exec. Example:
$source = 's3://abc';
$destination = 's3://xyz';
shell_exec('aws s3 sync ' . $source . ' ' . $destination);
If you set your AWS_KEY and AWS_SECRET in your .env file, the aws command will refer to these values when invoked from within Laravel.
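If the CLI does not pick up those values, note that the aws binary itself looks for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. A sketch that passes the credentials to the subprocess explicitly, assuming the s3 disk is configured in config/filesystems.php and the host has a Unix-style shell:

$source      = 's3://abc';
$destination = 's3://xyz';

// Build the command with the credentials exported for the child process.
$command = sprintf(
    'AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s aws s3 sync %s %s',
    escapeshellarg(config('filesystems.disks.s3.key')),
    escapeshellarg(config('filesystems.disks.s3.secret')),
    escapeshellarg($source),
    escapeshellarg($destination)
);

shell_exec($command);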
We have a PHP app where, to encrypt the connection with the database, we need to provide 3 files that shouldn't be publicly accessible but should be present on the server to make the DB connection (https://www.cleardb.com/developers/ssl_connections).
Obviously we don't want to store them in the SCM with the app code, so the only idea that comes to my mind is to use a post-deploy action hook and fetch those files from a storage account (with the keys and URIs provided in the app settings).
Is there a nicer/cleaner way to achieve this? :)
Thank you,
You can try to use a custom deployment script to execute additional scripts or commands during the deployment task. You can create a PHP script whose job is to download the certificate files from Blob Storage to a location on the server's file system, and then your PHP application can use these files for the DB connection.
Following are the general steps:
Enable the Composer extension in your portal.
Install the azure-cli module via npm; refer to https://learn.microsoft.com/en-us/azure/xplat-cli-install for more info.
Create a deployment script for PHP via the command azure site deploymentscript --php
Execute the command composer require microsoft/windowsazure, making sure you end up with a composer.json that includes the storage SDK dependency.
Create a PHP script in your root directory to download the files from Blob Storage (e.g. named run.php):
require_once 'vendor/autoload.php';

use WindowsAzure\Common\ServicesBuilder;
use MicrosoftAzure\Storage\Common\ServiceException;

$connectionString = "<connection_string>";
$blobRestProxy = ServicesBuilder::getInstance()->createBlobService($connectionString);

$container = 'certificate';
$blobs = ['client-key.pem', 'client-cert.pem', 'cleardb-ca.pem'];

foreach ($blobs as $k => $b) {
    $blobresult = $blobRestProxy->getBlob($container, $b);
    $source = stream_get_contents($blobresult->getContentStream());
    $result = file_put_contents($b, $source);
}
Modify the deploy.cmd script: add the line php run.php under the KuduSync step.
Deploy your application to Azure Web App via Git.
If you have any further concerns, please feel free to let me know.
I am now using the old Windows Azure SDK for PHP (2011).
require "../../Microsoft/WindowsAzure/Storage/Blob.php";
$accountName = 'resources';
$accountKey = 'accountkey';
$storageClient = new Microsoft_WindowsAzure_Storage_Blob('blob.core.windows.net',$accountName,$accountKey);
$storageClient->blobExists('resources','blob.jpg');
I want to upgrade to the new SDK but don't know how to install it manually.
The website is running on an "Azure Webapp" via FTP.
I managed to find that I need to copy the "WindowsAzure" folder, and reference the classes I need, but how?
I followed the instructions on this page https://azure.microsoft.com/nl-nl/documentation/articles/php-download-sdk/ but got stuck after that.
Edit: I have no experience with PEAR or Composer; I want to manually install (upload) it via FTP.
EDIT:
Now I've got the following:
require_once('WindowsAzure/WindowsAzure.php');
use WindowsAzure\Common\ServicesBuilder;
use WindowsAzure\Common\ServiceException;
// Create blob REST proxy.
$connectionString='DefaultEndpointsProtocol=http;AccountName=ttresources;AccountKey=***************';
$blobRestProxy = ServicesBuilder::getInstance()->createBlobService($connectionString);
$container = '87d57f73-a327-4344-91a4-27848d319a66';
$blob = '8C6CBCEB-F962-4939-AD9B-818C124AD3D9.mp4';
$blobexists = $blobRestProxy->blobExists($container,$blob);
echo $blobexists;
echo 'test';
The line $blobRestProxy = ServicesBuilder::getInstance()->createBlobService($connectionString); is blocking everything that happens after it.
After including HTTP/Request2.php, the page told me to place the Net/URL2 and PEAR packages in the root of my project (the same dir as the page that loads WindowsAzure.php). They can be found at:
http://pear.php.net/package/Net_URL2/redirected
http://pear.php.net/package/PEAR/download
You can get the latest Azure SDK for PHP from the GitHub repository at https://github.com/Azure/azure-sdk-for-php. Then upload the WindowsAzure folder to your Azure Web App via FTP.
And include the SDK in PHP scripts like:
require_once 'WindowsAzure/WindowsAzure.php';
The new Azure SDK for PHP requires some other dependencies. Since your script is blocked at $blobRestProxy = ServicesBuilder::getInstance()->createBlobService($connectionString);, you may check whether you have the HTTP/Request2 dependency; the new SDK leverages it to handle HTTP requests.
You may need to require this dependency first, as sketched below. Also, you can enable display_errors for troubleshooting.
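For a manual FTP install without Composer, a sketch of wiring up those PEAR-style dependencies (it assumes the Net_URL2, HTTP_Request2 and PEAR packages were uploaded next to the WindowsAzure folder, as described in the question):

// Let PHP find the PEAR-style packages (PEAR/, Net/, HTTP/) and the SDK.
set_include_path(get_include_path() . PATH_SEPARATOR . __DIR__);

// Surface loading errors while troubleshooting.
ini_set('display_errors', 1);
error_reporting(E_ALL);

require_once 'HTTP/Request2.php'; // pulls in Net/URL2.php and PEAR internally
require_once 'WindowsAzure/WindowsAzure.php';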
And in the new SDK, it seems there is no blobExists function any more. Instead, you may try:
// Get the blob; getBlob() throws a ServiceException (404) when the blob does not exist.
try {
    $blob = $blobRestProxy->getBlob("mycontainer", "myblob");
    // blob exists
} catch (ServiceException $e) {
    // blob does not exist (or another error occurred)
}
I just can't get this code to work. I'm getting an image from a URL and storing it in a temporary folder so I can upload it to a bucket on Google Cloud Storage. This is a CodeIgniter project. This function is within a controller and is able to get the image and store it in the project root's 'tmp/entries' folder.
Am I missing something? The file just doesn't upload to Google Cloud Storage. I went to the Blobstore Viewer in my local App Engine dev server and noticed that there is a file, but it's empty. I want to be able to upload this file to Google's servers from my dev server as well. The dev server seems to override this option and save all files locally. Please help.
public function get()
{
    $filenameIn  = 'http://upload.wikimedia.org/wikipedia/commons/1/16/HDRI_Sample_Scene_Balls_(JPEG-HDR).jpg';
    $filenameOut = FCPATH . '/tmp/entries/1.jpg';

    $contentOrFalseOnFailure   = file_get_contents($filenameIn);
    $byteCountOrFalseOnFailure = file_put_contents($filenameOut, $contentOrFalseOnFailure);

    $options = ["gs" => ["Content-Type" => "image/jpeg"]];
    $ctx = stream_context_create($options);
    file_put_contents("gs://my-storage/entries/2.jpg", $file, 0, $ctx);

    echo 'Saved the Image';
}
As you noticed, the dev app server emulates Cloud Storage locally, so this is the intended behaviour: it lets you test without modifying your production storage.
If you run the deployed app you should see the writes actually going to your GCS bucket.