Configuring AWS S3 with Flysystem and LiipImagine - PHP

My Symfony 5.4 project uses AWS S3 + Flysystem + LiipImagine.
On S3, I have a PRIVATE bucket, "myBucket", with 3 subfolders:
documents
photos
media
And this IAM permission:
"Effect": "Allow",
"Action": [
    "s3:ListBucket",
    "s3:GetObject",
    "s3:DeleteObject",
    "s3:GetObjectAcl",
    "s3:PutObjectAcl",
    "s3:PutObject"
],
"Resource": [
    "arn:aws:s3:::myBucket",
    "arn:aws:s3:::myBucket/*"
]
My setup (see https://github.com/liip/LiipImagineBundle/issues/823):
services.yaml
...
parameters:
    uploads_base_url: '%env(AWS_S3_UPLOAD_BASE_URL)%'

services:
    Aws\S3\S3Client:
        arguments:
            -
                version: '2006-03-01'
                region: '%env(AWS_S3_ACCESS_REGION)%'
                credentials:
                    key: '%env(AWS_S3_ACCESS_ID)%'
                    secret: '%env(AWS_S3_ACCESS_SECRET)%'
oneup_flysystem.yaml
...
adapters:
    aws_s3_adapter:
        awss3v3:
            client: Aws\S3\S3Client
            bucket: '%env(AWS_S3_BUCKET_NAME)%'
            options:
                ACL: bucket-owner-full-control

filesystems:
    aws_s3_system:
        adapter: aws_s3_adapter
liip_imagine.yaml
# https://symfony.com/bundles/LiipImagineBundle/current/cache-resolver/aws_s3.html
# I do not understand everything
...
driver: "gd"
loaders:
    aws_s3_loader:
        flysystem:
            filesystem_service: oneup_flysystem.aws_s3_system_filesystem
data_loader: aws_s3_loader
resolvers:
    aws_s3_resolver:
        flysystem:
            filesystem_service: oneup_flysystem.aws_s3_system_filesystem
            root_url: '%uploads_base_url%'
            cache_prefix: media/cache
cache: aws_s3_resolver
filter_sets:
    squared_thumbnail_small:
        quality: 70
        filters:
            thumbnail:
                size: [50, 50]
                mode: outbound
twig
{# calls the twig function assetPresigned #}
<a href="{{ assetPresigned('photos', player.photoFilename) }}" target="_blank">
    <img src="{{ assetPresigned('photos', player.photoFilename)|imagine_filter('squared_thumbnail_small') }}" alt="photo">
</a>
FileUploadService.php
public function assetPresigned(string $folder, string $filename): string
{
    $command = $this->s3Client->getCommand('GetObject', [
        'Bucket' => $this->awsS3BucketName,
        'Key' => $folder.'/'.$filename,
    ]);

    // return a presigned URL, valid for 5 minutes
    $presignedRequest = $this->s3Client->createPresignedRequest($command, '+5 minutes');

    return (string) $presignedRequest->getUri();
}
Problem 1:
My problem is that the "squared_thumbnail_small" filter rewrites the URL, removing the pre-signed signature.
Results in Twig:
href: the image appears on click, because the URL is pre-signed
img: the URL loses its signature, so the image is not displayed
NB: it is LiipImagine, via imagine_filter('squared_thumbnail_small'), that creates the thumbnail in myBucket/media. At this stage the thumbnail does not yet appear in myBucket/media, because it has not yet been displayed.
Question:
How do I configure my code so that the filter does not remove the pre-signed signature?
Here is what I tried
FileUploadService.php:
namespace App\Service;
...
public function assetPresigned(string $folder, string $filename): string
{
    // calls ImagineFilterService
    $this->imagineFilter->filter($folder.'/'.$filename);
    // ...
    $command....
}
ImagineFilterService.php
namespace App\Service;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpFoundation\RedirectResponse;
use Liip\ImagineBundle\Imagine\Cache\CacheManager;
use Liip\ImagineBundle\Imagine\Data\DataManager;
use Liip\ImagineBundle\Imagine\Filter\FilterManager;
use App\Service\FileUploadService;
# https://symfony.com/bundles/LiipImagineBundle/current/filters.html#dynamic-filters
# I tried to create a filter dynamically based on this documentation
class ImagineFilterService
{
    public function filter($path)
    {
        $filter = 'squared_thumbnail_small';

        // if the photo is not yet stored in myBucket/media, save it by applying the filter
        if (!$this->cacheManager->isStored($path, $filter)) {
            $binary = $this->dataManager->find($filter, $path);
            $filteredBinary = $this->filterManager->applyFilter($binary, $filter);

            // the resolver is passed as the last argument
            $this->cacheManager->store($filteredBinary, $path, $filter, 'aws_s3_resolver');
        }

        // ERROR 403!!!
        return $this->cacheManager->resolve($path, $filter);
    }
}
I now retrieve the thumbnails that were already stored in myBucket/media (OK).
But when saving new thumbnails to myBucket/media I get a 403 error:
AWS HTTP error: Client error: PUT resulted in a "403 Forbidden" response:
Access Denied
So it seems I am losing the credentials somewhere.
This is complex for me and a precise answer (ideally a piece of code) would help me a lot. I don't know where I'm wrong; I've been stuck for several days.
Thanks for your help.
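One possible direction (a sketch, untested against this exact setup): stop piping the pre-signed URL through imagine_filter. Instead, warm the cache with ImagineFilterService, then pre-sign the cached object's key directly. The filteredCacheKey helper and the media/cache/<filter>/<path> layout below are assumptions derived from the cache_prefix in the config above, not a confirmed LiipImagine API:

```php
<?php
// Hypothetical helper: build the S3 key of the filtered image the way the
// flysystem resolver appears to lay it out (cache_prefix/filter/original-path),
// then pre-sign that key instead of filtering a pre-signed URL.
function filteredCacheKey(string $cachePrefix, string $filter, string $path): string
{
    return trim($cachePrefix, '/').'/'.$filter.'/'.ltrim($path, '/');
}

// In FileUploadService, this key would then be pre-signed (sketch):
//   $command = $this->s3Client->getCommand('GetObject', [
//       'Bucket' => $this->awsS3BucketName,
//       'Key'    => filteredCacheKey('media/cache', 'squared_thumbnail_small', $folder.'/'.$filename),
//   ]);
//   return (string) $this->s3Client->createPresignedRequest($command, '+5 minutes')->getUri();

echo filteredCacheKey('media/cache', 'squared_thumbnail_small', 'photos/player1.jpg');
// media/cache/squared_thumbnail_small/photos/player1.jpg
```

Because the thumbnail object itself is pre-signed, no later filter call can strip the signature.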

Related

Liip Imagine generates wrong remove path on listener event in Symfony 5

I'm using LiipImagineBundle with Symfony 5. I've got it configured, and the images are generated and shown properly.
However, I have an issue with an event listener for postUpdate and preRemove where I want to delete the image.
The listener itself works fine, but LiipImagine generates the wrong storage path, and thus I'm stuck with hundreds of images that never get deleted properly.
My config is fairly standard:
parameters:
    img_path: "/public/assets/img/uploaded"
    display_img_path: "/assets/img/uploaded/"
    google_secret_key: '%env(GOOGLE_KEY)%'
    site_name: '%env(SITE_NAME)%'
    liip_img_cache_path: '/media/cache'

liip_imagine:
    resolvers:
        default:
            web_path:
                web_root: "%kernel.project_dir%/public"
                cache_prefix: "%liip_img_cache_path%"
    filter_sets:
        cache: ~
        listing_show_thumbnails:
            quality: 60
            filters:
                thumbnail: { size: [450], mode: outbound, allow_upscale: true }
        user_profile_public_thumbnail:
            quality: 60
            filters:
                thumbnail: { size: [600], mode: outbound, allow_upscale: true }
        add_single_thumbnail:
            quality: 60
            filters:
                thumbnail: { size: [300], mode: outbound, allow_upscale: true }
        add_single_thumbnail_carousel:
            quality: 80
            filters:
                thumbnail: { size: [800], mode: outbound, allow_upscale: true }
And the way the listener is set up is also standard.
class CacheImageListener
{
    protected $cacheManager;
    protected $parameterBag;

    public function __construct(CacheManager $cacheManager, ParameterBagInterface $parameterBag)
    {
        $this->cacheManager = $cacheManager;
        $this->parameterBag = $parameterBag;
    }

    public function postUpdate(LifecycleEventArgs $args)
    {
        //... some other stuff here
    }

    public function preRemove(LifecycleEventArgs $args)
    {
        $entity = $args->getEntity();
        if ($entity instanceof UserImage) {
            if ($this->cacheManager->isStored($entity->getPath(), 'user_profile_public_thumbnail')) {
                $this->cacheManager->remove($entity->getPath(), 'user_profile_public_thumbnail');
            }
        }
    }

    // other functions below
}
The path generated by the function
$this->cacheManager->isStored(...)
is
C:\Bitnami\wampstack-7.4.9-0\apps\coleg/public/media/cache/user_profile_public_thumbnail/6012e6dbf4cad3416fb609e343470c27778f491dd1debc238fa2d9b3676fae007cc439bcb912feaca006db1bc23a6d17759e.jpeg
The first part (C:\Bitnami\wampstack-7.4.9-0\apps\coleg/public/media/cache/user_profile_public_thumbnail/) is correct, but the rest is not.
Basically it's missing the "suffix", I suppose you'd call it, meaning the assets/img/uploaded part, which is where I store my original images.
I'm pretty sure I have something misconfigured somewhere but I can't put my finger on it.
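A sketch of one possible fix (an assumption, not verified against this setup): the web_path resolver keys the cache on the relative path that was passed to imagine_filter in the template, so the listener must pass that exact string, including the display_img_path prefix, to isStored()/remove(). A hypothetical helper to rebuild it, assuming the entity stores only the bare filename:

```php
<?php
// Hypothetical: rebuild the same source path the template used with
// imagine_filter, i.e. display_img_path + stored filename.
function imagineSourcePath(string $displayImgPath, string $filename): string
{
    return rtrim($displayImgPath, '/').'/'.ltrim($filename, '/');
}

// In preRemove (sketch):
//   $path = imagineSourcePath($this->parameterBag->get('display_img_path'), $entity->getPath());
//   $this->cacheManager->remove($path, 'user_profile_public_thumbnail');

echo imagineSourcePath('/assets/img/uploaded/', 'photo.jpeg');
// /assets/img/uploaded/photo.jpeg
```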

Laravel home(~) SCSS paths not resolving correctly

I've tried a number of solutions and have not found anything that works.
My base directory for all files is localhost/manpower.
I put this in my webpack config...
let mix = require('laravel-mix');
/*
|--------------------------------------------------------------------------
| Mix Asset Management
|--------------------------------------------------------------------------
|
| Mix provides a clean, fluent API for defining some Webpack build steps
| for your Laravel application. By default, we are compiling the Sass
| file for the application as well as bundling up all the JS files.
|
*/
mix.js('resources/assets/js/app.js', 'public/assets/js')
.sass('resources/assets/sass/app.scss', 'public/assets/css')
.sass('resources/assets/sass/header.scss', 'public/assets/css')
.version();
When I use the ~ for home, processing the CSS just removes the ~.
mix.setPublicPath('manpower/public');
just creates a directory under the existing file structure and puts the files in it. Unfortunately this popular solution does not change the output of the final CSS.
No matter what I try, I can't get it to add 'manpower' to the URLs I put in the SCSS!
Posting my config to help you. I think you forgot to define the alias. I keep my config in two files to help the IDE (PhpStorm) recognize the ~
(solution from this answer: https://stackoverflow.com/a/50159420/5458355)
// webpack.config.js
const path = require('path')
const webpack = require('webpack')
const LodashModuleReplacementPlugin = require('lodash-webpack-plugin')
module.exports = {
resolve: {
extensions: ['.js', '.json', '.vue'],
alias: {
'~': path.resolve(__dirname, './resources/assets/js')
}
},
output: {
path: process.env.MIX_APP_URL, // var from .env
publicPath: '/Socse.GST/public/',
chunkFilename: 'dist/js/chunk.[name].js'
}
}
// webpack.mix.js
const path = require('path')
const mix = require('laravel-mix')
const webpack = require('webpack')
mix
.js('resources/assets/js/app.js', 'public/dist/js')
.sass('resources/assets/sass/app.scss', 'public/dist/css')
.styles([
'resources/assets/sass/fa/css/fontawesome-all.css',
'public/css/awesome-bootstrap-checkbox.css',
'public/css/inspinia.css',
'node_modules/vue-multiselect/dist/vue-multiselect.min.css'
], 'public/dist/css/all.css')
.sourceMaps()
//.disableNotifications()
if (mix.inProduction()) {
mix.version()
}
mix.webpackConfig({
plugins: [
new webpack.ProvidePlugin({
$: 'jquery',
jQuery: 'jquery',
'window.jQuery': 'jquery',
Popper: ['popper.js', 'default']
})
],
resolve: {
extensions: ['.js', '.json', '.vue'],
alias: {
'~': path.join(__dirname, './resources/assets/js')
}
},
output: {
chunkFilename: 'dist/js/chunk.[name].[chunkhash].js',
path: process.env.MIX_APP_URL,
publicPath: '/wms/public/'
}
})
New version
const path = require('path');
const mix = require('laravel-mix');
const webpack = require('webpack');
mix
.js('resources/assets/js/app.js', 'assets/js')
.sass('resources/assets/sass/app.scss', 'assets/css')
.sass('resources/assets/sass/header.scss', 'assets/css')
.version()
.options({
processCssUrls: true
});
if (mix.inProduction()) {
mix.version()
}
mix.webpackConfig({
resolve: {
alias: {
'~': path.join(__dirname, './resources/')
}
},
output: {
chunkFilename: 'dist/js/chunk.[name].[chunkhash].js',
path: process.env.MIX_APP_URL,
publicPath: '/public/'
}
})
I'm pretty sure this is a problem with the webpack.config.js.
I've been able to resolve some of the issues but the error remains.
/**
* As our first step, we'll pull in the user's webpack.mix.js
* file. Based on what the user requests in that file,
* a generic config object will be constructed for us.
*/
let mix = require('../index');
let ComponentFactory = require('../components/ComponentFactory');
new ComponentFactory().installAll();
require(Mix.paths.mix());
/**
* Just in case the user needs to hook into this point
* in the build process, we'll make an announcement.
*/
Mix.dispatch('init', Mix);
/**
* Now that we know which build tasks are required by the
* user, we can dynamically create a configuration object
* for Webpack. And that's all there is to it. Simple!
*/
let WebpackConfig = require('../src/builder/WebpackConfig');
//module.exports = new WebpackConfig().build();
const path = require('path')
const webpack = require('webpack')
module.exports = {
resolve: {
extensions: ['.js', '.json', '.vue'],
alias: {
'~': path.resolve(__dirname, './resources/assets/js')
},
},
output: {
path: process.env.MIX_APP_URL,
publicPath: '/manpower/public/',
chunkFilename: 'dist/js/chunk.[name].js'
}
}
You may want to provide your versions of mix and sass-loader (there is a bug in old sass-loader versions where the tilde does not work). Also, please do not post comments as updates; it causes confusion in your question.
Generally, avoid the tilde and use a relative path instead. You want to look at this answer: https://stackoverflow.com/a/33972875/2188545
Even without all the confusing comments, the tilde is confusing when used in url(), especially when your browser path manpower/ is not a simple root / path. Don't get me wrong, the tilde is fine with CSS @import, as that is generally resolved at compile time; it is just more confusing when used in url(). Also, please use quotes around your CSS url() values.
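To illustrate the advice above, a hypothetical SCSS fragment contrasting the two styles (the file paths are assumptions about your tree, not taken from your project):

```scss
// Tilde: resolved through the webpack alias; breaks if '~' is not defined
// or the public base path is not '/'.
// background: url("~img/logo.png");

.banner {
    // Relative path: resolved from this file's location, so it keeps working
    // under a non-root base path such as /manpower/.
    background: url("../img/logo.png");
}
```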

Sonata Media - file extension validation in admin

I just want to validate the extension of images uploaded in Sonata Admin (v3.28.0) with Sonata Media Bundle (v3.10.0) in a Symfony (v2.8.32) application.
I've read all similar questions and the Sonata documentation, but still have no success.
I tried adding constraints to config.yml:
sonata_media:
    providers:
        image:
            allowed_extensions:
                - 'jpg'
                - 'png'
I am surprised that it doesn't work as is, because the standard FileProvider (which ImageProvider extends) has an extension check in its validate method. But the method is not being called.
So I also tried to create a custom provider:
services.yml:
sonata.media.provider.custom:
    class: Application\Sonata\MediaBundle\Provider\CustomImageProvider
    tags:
        - { name: sonata.media.provider }
    arguments:
        - sonata.media.provider.custom
        - '@sonata.media.filesystem.local'
        - '@sonata.media.cdn.server'
        - '@sonata.media.generator.default'
        - '@sonata.media.thumbnail.format'
        - ['jpg', 'png']
        - ['image/pjpeg', 'image/jpeg', 'image/png', 'image/x-png']
        - '@sonata.media.adapter.image.imagick'
        - '@sonata.media.metadata.proxy'
    calls:
        - [ setTemplates, [ { helper_view: 'SonataMediaBundle:Provider:view_image.html.twig', helper_thumbnail: 'SonataMediaBundle:Provider:thumbnail.html.twig' } ] ]
Application\Sonata\MediaBundle\Provider\CustomImageProvider.php:
<?php

namespace Application\Sonata\MediaBundle\Provider;

use Sonata\CoreBundle\Validator\ErrorElement;
use Sonata\MediaBundle\Model\MediaInterface;
use Sonata\MediaBundle\Provider\ImageProvider;

class CustomImageProvider extends ImageProvider
{
    public function validate(ErrorElement $errorElement, MediaInterface $media)
    {
        throw new \Exception();
    }
}
config.yml:
sonata_media:
    contexts:
        image:
            providers:
                - sonata.media.provider.custom
            formats:
                small: { width: 100, quality: 70 }
                big: { width: 500, quality: 70 }
But the validate method is still not being called.
So when I try to upload a GIF image I get an error:
Length of either side cannot be 0 or negative, current size is x
Am I missing something?
UPDATE
Simple validation can be added right in the SomeEntityAdmin class, like this:
public function validate(ErrorElement $errorElement, $object)
{
    /** @var Media $image */
    $image = $object->getImage();
    if (!in_array($image->getContentType(), ['image/pjpeg', 'image/jpeg', 'image/png', 'image/x-png'])) {
        $errorElement
            ->with('image')
            ->addViolation('Invalid file type')
            ->end()
        ;
    }
}
But it's not a good solution if you want to validate a batch of uploaded images.
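If an extension check is wanted alongside the content-type check above, a minimal sketch of an allow-list helper (hasAllowedExtension is hypothetical, not part of Sonata; it could be called from such a validate method with the media's reference filename):

```php
<?php
// Hypothetical helper: allow-list check on a filename's extension,
// case-insensitive and strict about the comparison.
function hasAllowedExtension(string $filename, array $allowed): bool
{
    $ext = strtolower(pathinfo($filename, PATHINFO_EXTENSION));

    return in_array($ext, $allowed, true);
}

var_dump(hasAllowedExtension('photo.GIF', ['jpg', 'png'])); // bool(false)
var_dump(hasAllowedExtension('photo.png', ['jpg', 'png'])); // bool(true)
```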

Deleting Thumbnails from media/cache in liip imagine bundle

I have the bundle installed and configured with Sonata Admin Bundle. When I try to remove an image, the image is properly deleted from the folder, but the thumbnail stored in media/cache is not.
This is my liip_imagine yml:
liip_imagine:
    loaders:
        loader_s3_thumbnail:
            stream:
                wrapper: gaufrette://questions_image_fs/
    filter_sets:
        question_thumb:
            cache: default
            data_loader: loader_s3_thumbnail
            # list of transformations to apply (the "filters")
            filters:
                thumbnail: { size: [120, 120], mode: outbound }
        provider_thumb:
            cache: default
            data_loader: loader_s3_thumbnail
            # list of transformations to apply (the "filters")
            filters:
                thumbnail: { size: [200, 200], mode: inset }
Any idea why, or how to delete these thumbnails?
A workmate managed to solve it using the Liip CacheManager. Here is the code:
Service:
question.admin_bundle.event_listener.delete_thumbnails:
    class: QuestionAdminBundle\EventListener\DeleteThumbnails
    arguments: [ '@liip_imagine.cache.manager' ]
    tags:
        - { name: kernel.event_listener, event: vich_uploader.pre_remove, method: postRemove }
PHP:
use Liip\ImagineBundle\Imagine\Cache\CacheManager;
[...]
public function __construct(CacheManager $cacheManager)
{
    $this->cacheManager = $cacheManager;
}
[...]
public function postRemove(Event $event)
{
    $image = $event->getObject();
    if ($image instanceof Image) {
        $this->cacheManager->remove($image->getName());
    }
}

Gaufrette and Symfony 2: There is no filesystem defined for the "images" domain

I'm using Symfony3 with the KnpGaufretteBundle to connect to an Amazon S3 bucket, following the AWS S3 method outlined in their GitHub README:
aws_s3_adapter:
    key: "%aws_key%"
    secret_key: "%aws_secret%"
    region: "%aws_region%"

knp_gaufrette:
    adapters:
        images:
            aws_s3:
                service_id: 'aws_s3_adapter.client'
                bucket_name: '%aws_bucket%'
                options:
                    directory: 'images'
    filesystems:
        images:
            adapter: images
            alias: images_fs
I also have a service defined that I want to use to manage this filesystem (and others) with.
Definition:
services:
    test.image_manager:
        class: TestBundle\Filesystem\FileManager
        arguments:
            filesystem: '@images_fs'
            filesystem_name: "images"
            mimetypes: ["image/jpeg", "image/png", "image/gif"]
Class:
<?php

namespace TestBundle\Filesystem;

use Symfony\Component\HttpFoundation\File\UploadedFile;
use Symfony\Component\HttpFoundation\BinaryFileResponse;
use Gaufrette\Filesystem;
use Gaufrette\StreamWrapper;

class FileManager
{
    private $allowedMimeTypes;
    private $filesystem;
    private $filesystem_name;

    public function __construct(Filesystem $filesystem, $filesystem_name, $mimetypes = array())
    {
        $this->filesystem = $filesystem;
        $this->filesystem_name = $filesystem_name;
        $this->allowedMimeTypes = $mimetypes;
    }

    public function upload(UploadedFile $file, $filename)
    {
        // Check if the file's mime type is in the list of allowed mime types.
        if (!in_array($file->getClientMimeType(), $this->allowedMimeTypes)) {
            throw new \InvalidArgumentException(sprintf('Files of type %s are not allowed.', $file->getClientMimeType()));
        }

        $adapter = $this->filesystem->getAdapter();
        $adapter->setMetadata($filename, array('contentType' => $file->getClientMimeType()));

        return $adapter->write($filename, file_get_contents($file->getPathname()));
    }

    public function fetch($filename)
    {
        if (!$this->filesystem->has($filename)) {
            return false;
        }

        /* -- PROBLEM -- */
        StreamWrapper::register();

        return new BinaryFileResponse("gaufrette://{$this->filesystem_name}/{$filename}");
        /* -- PROBLEM -- */
    }

    public function delete($filename)
    {
        if (!$this->filesystem->has($filename)) {
            return false;
        }

        return $this->filesystem->delete($filename);
    }
}
I'm able to upload to the bucket successfully using the upload function, which tells me that the filesystem exists and is working properly.
I am not, however, able to use Gaufrette\StreamWrapper to serve the file via a BinaryFileResponse as the documentation says I should. Instead it gives me the error from the title: There is no filesystem defined for the "images" domain.
The filesystem definitely exists, since I'm using it to upload the images. Any clues as to what is preventing me from using that filesystem would be very helpful. The Gaufrette documentation is really sparse as far as I've found, but I'm going to keep digging.
Looking at MainConfiguration.php showed that there is a stream_wrapper option in the bundle's configuration. I added it to my config.yml, under where the filesystems are defined, like so:
knp_gaufrette:
    adapters:
        images:
            aws_s3:
                service_id: 'aws_s3_adapter.client'
                bucket_name: '%aws_bucket%'
                options:
                    directory: 'images'
    # ...
    filesystems:
        images:
            adapter: images
            alias: images_fs
    # ...
    stream_wrapper:
        filesystems: [ images, ... ]
and the above code now works.
