CKFinder 3: dynamically change folder path - PHP

Is there any way to change the baseUrl of CKFinder dynamically?
I need to use a path like /websitebuilder/www/user_images/$id/. I searched Google for an answer, but I didn't manage to make it work.
Can someone please give me a hint on how to do that?
I know that you change the baseUrl param in config.php, but how do I make it dynamic?

You can use the "different folder per instance" example from the CKFinder 3 HOWTO.
Basically, you should update your config.php to something like this:
$id = getID();

$config['backends'][] = array(
    'name' => 'default',
    'adapter' => 'local',
    'baseUrl' => 'http://example.com/ckfinder/userfiles/' . $id,
    'root' => '/path/to/ckfinder/userfiles/' . $id
);
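Here, getID() is a placeholder for however your application resolves the current user's id. As a minimal sketch, assuming the id was stored in a PHP session at login, it could look like this:
// Hypothetical helper -- adapt to your own authentication mechanism.
// Assumes the application stored the user's id in $_SESSION['user_id'] at login.
function getID()
{
    if (session_status() === PHP_SESSION_NONE) {
        session_start();
    }

    if (empty($_SESSION['user_id'])) {
        // No authenticated user: refuse to serve the connector.
        header('HTTP/1.1 403 Forbidden');
        exit;
    }

    return (int) $_SESSION['user_id'];
}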

1. Create a new middleware:
php artisan make:middleware DynamicCkfinderConfig
and put the following code in the handle function. Don't forget to add use Auth; (or use Illuminate\Support\Facades\Auth;) at the top of the file:
public function handle(Request $request, Closure $next)
{
    if (auth()->check()) {
        config([
            'ckfinder.backends.default' => [
                'name' => 'default',
                'adapter' => 'local',
                'baseUrl' => '/user-' . md5(Auth::user()->id) . '/',
                'root' => public_path('/user-' . md5(Auth::user()->id) . '/'),
                'chmodFiles' => 0777,
                'chmodFolders' => 0755,
                'filesystemEncoding' => 'UTF-8'
            ]
        ]);
    }

    return $next($request);
}
2. Register the middleware in app/Http/Kernel.php:
protected $routeMiddleware = [
    'ckfinderConfig' => \App\Http\Middleware\DynamicCkfinderConfig::class,
];
3. Use the middleware on the CKFinder connector route:
Route::any('/ckfinder/connector', [CKFinderController::class, 'requestAction'])
    ->name('ckfinder_connector')
    ->middleware(['ckfinderConfig']);

Related

"Disk [public] does not have a configured driver" in Laravel image upload

I am trying to upload a file to a public folder, which was working previously, but now it shows the error below:
Disk [public] does not have a configured driver.
I checked the configured driver in config/filesystems.php, but it is already set there. I can't figure out where the issue might be.
Upload code:
public function upload(ProductImageRequest $request, Product $product)
{
    $image = $request->file('file');
    $dbPath = $image->storePublicly('uploads/catalog/'.$product->id, 'public');

    if ($product->images === null || $product->images->count() === 0) {
        $imageModel = $product->images()->create([
            'path' => $dbPath,
            'is_main_image' => 1,
        ]);
    } else {
        $imageModel = $product->images()->create(['path' => $dbPath]);
    }

    return response()->json(['image' => $imageModel]);
}
Code in config/filesystems.php
'disks' => [
    'local' => [
        'driver' => 'local',
        'root' => storage_path('app'),
    ],
    'public' => [
        'driver' => 'local',
        'root' => storage_path('app/public'),
        'url' => env('APP_URL').'/storage',
        'visibility' => 'public',
    ],
I use this code for moving the picture and storing its name; you may want to give it a shot:
// get the icon path and move the uploaded file
$iconName = time().'.'.request()->icon->getClientOriginalExtension();
$icon_path = '/category/icon/'.$iconName;
request()->icon->move(public_path('/category/icon/'), $iconName);
$category->icon = $icon_path;
I usually move the image and then store its path in the database, which is what this code shows. You can edit it as desired.
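For context, a minimal sketch of how that snippet might sit inside a controller method (the Category model, the icon request field, and the JSON response are assumptions, not from the original post):
// Assumes: use App\Category; use Illuminate\Http\Request;
public function store(Request $request)
{
    $request->validate([
        'icon' => 'required|image|mimes:jpg,jpeg,png',
    ]);

    // Build a unique file name and move the upload under public/category/icon/
    $iconName = time() . '.' . $request->icon->getClientOriginalExtension();
    $request->icon->move(public_path('/category/icon/'), $iconName);

    // Persist the relative path so the icon can be rendered later
    $category = new Category();
    $category->icon = '/category/icon/' . $iconName;
    $category->save();

    return response()->json(['icon' => $category->icon]);
}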

Slim v4 Creating Log File

My issue is that I'm having trouble making Slim record all its actions in a file (for example: app.log). I ran across a lot of tutorials and forum posts similar to this one, but they were all using v3 of the Slim Framework.
I saw some posts suggesting something like this inside a settings.php:
return [
    'settings' => [
        'displayErrorDetails' => true, // set to false in production
        'addContentLengthHeader' => false, // Allow the web server to send the content-length header

        // Renderer settings
        'renderer' => [
            'template_path' => __DIR__ . '/../templates/',
        ],

        // Monolog settings
        'logger' => [
            'name' => 'my-app',
            'path' => __DIR__ . '/../logs/' . $logDate->format('Y-m-d') . 'app.log',
        ],
    ],
];
But the problem with that method is that settings aren't set up this way anymore in v4. So here I am, stuck. If anybody could give me a hand, it would help a lot!
To load the settings within a container you have to add a container definition for it.
Example settings file: config/settings.php
return [
    // set to false in production
    'displayErrorDetails' => true,

    // Renderer settings
    'renderer' => [
        'template_path' => __DIR__ . '/../templates/',
    ],

    // Monolog settings
    'logger' => [
        'name' => 'my-app',
        'path' => __DIR__ . '/../logs/' . date('Y-m-d') . '_app.log',
    ],
];
Example container entry in config/container.php:
use Psr\Container\ContainerInterface;
// ...
return [
    // ...
    'settings' => function () {
        return require __DIR__ . '/settings.php';
    },
    // Add more entries here ...
];
To fetch the settings within the container use this:
use Monolog\Logger;
use Psr\Container\ContainerInterface;
use Psr\Log\LoggerInterface;

// ...

return [
    // ...
    LoggerInterface::class => function (ContainerInterface $container) {
        $settings = $container->get('settings');
        $name = $settings['logger']['name'];

        $logger = new Logger($name);

        // Add logger handler...

        return $logger;
    },
    // ...
];
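For completeness, here is a hedged sketch of that container entry with a Monolog StreamHandler wired to the configured log path (it assumes monolog/monolog is installed):
use Monolog\Handler\StreamHandler;
use Monolog\Logger;
use Psr\Container\ContainerInterface;
use Psr\Log\LoggerInterface;

return [
    // ...
    LoggerInterface::class => function (ContainerInterface $container) {
        $settings = $container->get('settings')['logger'];

        $logger = new Logger($settings['name']);

        // Write every record of level DEBUG and above to the configured log file
        $logger->pushHandler(new StreamHandler($settings['path'], Logger::DEBUG));

        return $logger;
    },
];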
Tip: For autowiring support it's better to use a collection object instead of a simple "string" as the container identifier.

Escaping a PHP function as an Array value

I'm working on a Drupal 8 starter kit with Composer, similar to drupal-composer/drupal-project.
In my post-install script, I want to re-generate a settings.php file with my custom values.
I've seen that this can be done with the drupal_rewrite_settings function.
For example, I'm rewriting the config_sync_directory value like this:
require_once $drupalRoot . '/core/includes/bootstrap.inc';
require_once $drupalRoot . '/core/includes/install.inc';

new Settings([]);

$settings['settings']['config_sync_directory'] = (object) [
    'value' => '../config/sync',
    'required' => TRUE,
];
drupal_rewrite_settings($settings, $drupalRoot . '/sites/default/settings.php');
The problem is that I want my Drupal 8 project to use Dotenv, so the maintainers don't have to modify settings.php but only a .env file in the root folder of the project. To make that work, my settings.php must look like this:
$databases['default']['default'] = [
    'database' => getenv('MYSQL_DATABASE'),
    'driver' => 'mysql',
    'host' => getenv('MYSQL_HOSTNAME'),
    'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
    'password' => getenv('MYSQL_PASSWORD'),
    'port' => '',
    'prefix' => '',
    'username' => getenv('MYSQL_USER'),
];
$settings['trusted_host_patterns'] = explode(',', '^'.getenv('SITE_URL').'$');
As you can see, the values are replaced by PHP functions, and I can't see a good way to print those values, to the point I'm not even sure that's possible.
So my question is: is it possible to escape a PHP function as an array value when declaring this variable?
Looks like it's not possible because of the way the Drupal function works.
Solution 1 (by @misorude)
Using the drupal_rewrite_settings function, we can add the setting's value as a string, like this:
$settings['settings']['trusted_host_patterns'] = (object) [
    'value' => "FUNC[explode(',', '^'.getenv('SITE_URL').'$')]",
    'required' => TRUE,
];
After that, we can replace every occurrence of "FUNC[***]" with *** directly in the generated settings.php file, as sketched below.
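A minimal sketch of that post-processing step, assuming drupal_rewrite_settings wrote the value out as an escaped single-quoted string (the file path is just an example):
$settingsFile = $drupalRoot . '/sites/default/settings.php';
$contents = file_get_contents($settingsFile);

// Unwrap every 'FUNC[...]' marker and undo the single-quote escaping that was
// added when the value was written out as a quoted PHP string.
$contents = preg_replace_callback(
    "/'FUNC\\[(.*?)\\]'/s",
    fn (array $m) => str_replace("\\'", "'", $m[1]),
    $contents
);

file_put_contents($settingsFile, $contents);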
Solution 2
Put all your settings in a separate file. For example, a custom.settings.php file:
if (getenv('DEBUG') == 'true') {
    $settings['container_yamls'][] = DRUPAL_ROOT . '/sites/dev.services.yml';
    $config['system.performance']['css']['preprocess'] = FALSE;
    $config['system.performance']['js']['preprocess'] = FALSE;
}

$databases['default']['default'] = [
    'database' => getenv('MYSQL_DATABASE'),
    'driver' => 'mysql',
    'host' => getenv('MYSQL_HOSTNAME'),
    'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
    'password' => getenv('MYSQL_PASSWORD'),
    'port' => '',
    'prefix' => '',
    'username' => getenv('MYSQL_USER'),
];

$settings['trusted_host_patterns'] = explode(',', '^'.getenv('SITE_URL').'$');
$settings['file_private_path'] = 'sites/default/files/private';
$settings['config_sync_directory'] = '../config/sync';
Then we can copy the default.settings.php and add our custom settings.
$fs = new Filesystem();
$settings_generated = $drupalRoot . '/sites/default/settings.php';
$settings_default = $drupalRoot . '/sites/default/default.settings.php';
$settings_custom = $drupalRoot . '/../includes/custom.settings.php';
$fs->remove($settings_generated);
$fs->dumpFile($settings_generated, file_get_contents($settings_default) . file_get_contents($settings_custom));
There's also an appendToFile method that seems way better than dumping a new file with dumpFile, but unfortunately it was not working.

How to set User ID to Storage path in Laravel?

I'm building a RESTful API using Laravel 5 and MongoDB.
I'm saving an avatar image for users.
It's working fine, but I'm trying to create a folder for every user, for example: "app/players/images/USERID".
I've tried to do something like this in different ways, but I always get "Driver [] is not supported.":
\Storage::disk('players'.$user->id)->put($image_name, \File::get($image));
UploadImage:
public function uploadImage(Request $request)
{
    $token = $request->header('Authorization');
    $jwtAuth = new \JwtAuth();
    $user = $jwtAuth->checkToken($token, true);

    $image = $request->file('file0');
    $validate = \Validator::make($request->all(), [
        'file0' => 'required|image|mimes:jpg,jpeg,png'
    ]);

    if ( !$image || $validate->fails() )
    {
        $data = array(
            'code' => 400,
            'status' => 'error',
            'message' => 'Image uploading error-'
        );
    }
    else
    {
        $image_name = time().$image->getClientOriginalName();
        \Storage::disk('players')->put($image_name, \File::get($image));

        $user_update = User::where('_id', $user->id)->update(['imagen' => $image_name]);

        $data = array(
            'code' => 200,
            'status' => 'success',
            'user' => $user->id,
            'imagen' => $image_name
        );
    }

    return response()->json($data, $data['code']);
}
filesystems.php:
'players' => [
    'driver' => 'local',
    'root' => storage_path('app/players/images/'),
    'url' => env('APP_URL').'/storage',
    'visibility' => 'public',
],
I expect the user avatar image to be saved in a folder named after the user ID.
The disk call tells Laravel which filesystem to use. Let's assume you have a user with id 1: with your code it will try to access a filesystem called players1, which is not configured.
What is usually done is to put these files in per-user folder structures within a single disk, so instead you could do the following, which will put your image file in the folder 1:
\Storage::disk('players')->put($user->id . '/' . $image_name, \File::get($image));
I had a similar problem; check whether this line achieves what you want:
\Storage::disk('players')->put("{$user->id}/{$image_name}", \File::get($image));
I relied on the Laravel guide: File Storage - File Uploads.
I hope it helps you. Best regards.

Add Metadata, headers (Expires, CacheControl) to a file uploaded to Amazon S3 using the Laravel 5.0 Storage facade

I am trying to find out how to add metadata or headers (Expires, CacheControl, etc.) to a file uploaded using the Laravel 5.0 Storage facade. I have used this page as a reference:
http://laravel.com/docs/5.0/filesystem
The following code works correctly:
Storage::disk('s3')->put('/test.txt', 'test');
After digging, I also found that there is a 'visibility' parameter which sets the ACL to 'public-read', so the following also works correctly:
Storage::disk('s3')->put('/test.txt', 'test', 'public');
But I would like to be able to set some other header values on the file. I have tried the following:
Storage::disk('s3')->put('/index4.txt', 'test', 'public', array('Expires'=>'Expires, Fri, 30 Oct 1998 14:19:41 GMT'));
This doesn't work. I have also tried:
Storage::disk('s3')->put('/index4.txt', 'test', array('ACL'=>'public-read'));
But that creates an error where the 'visibility' parameter cannot be converted from a string to an array. I have checked the source of AwsS3Adapter and it seems there is code for options, but I cannot see how to pass them correctly. I think it takes the following:
protected static $metaOptions = [
    'CacheControl',
    'Expires',
    'StorageClass',
    'ServerSideEncryption',
    'Metadata',
    'ACL',
    'ContentType',
    'ContentDisposition',
    'ContentLanguage',
    'ContentEncoding',
];
Any help on how to accomplish this would be appreciated.
First, you need to call getDriver() so that you can pass an array of options to put().
So for your example:
Storage::disk('s3')->getDriver()->put('/index4.txt', 'test', ['visibility' => 'public', 'Expires' => 'Fri, 30 Oct 1998 14:19:41 GMT']);
Be aware that if you're setting Cache-Control it has to be passed as CacheControl. This may well be true for other keys with non-alphanumeric characters.
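For example, adding a cache header through the same call could look like this (a sketch; note the CacheControl key and the illustrative values):
Storage::disk('s3')->getDriver()->put('/index4.txt', 'test', [
    'visibility' => 'public',
    // Passed as CacheControl, stored as the Cache-Control header on the object
    'CacheControl' => 'max-age=3600, public',
    'Expires' => 'Fri, 30 Oct 1998 14:19:41 GMT',
]);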
If you want to have global defaults with headers, this works in Laravel 5.4. Change your config/filesystems.php file like so:
's3' => [
    'driver' => 's3',
    'key' => env('AWS_KEY'),
    'secret' => env('AWS_SECRET'),
    'region' => env('AWS_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'options' => [
        'CacheControl' => 'max-age=315360000, no-transform, public',
        'ContentEncoding' => 'gzip',
    ],
],
After attempting the above answers and failing to add custom user metadata, and after digging through the SDK code, it turns out to be a bit easier than I thought (assume $path is a path to an image file). I didn't appear to need to call the getDriver() method either; I'm not sure whether that makes any difference with the current version of the AWS SDK.
Storage::put(
    'image.jpg',
    file_get_contents($path),
    [
        'visibility' => 'public',
        'Metadata' => [
            'thumb' => '320-180',
        ],
    ]
);
So now, if you view the newly uploaded file in S3, you will see the custom metadata.
Hope this helps someone.
The answer from @Paras is good, but there is one thing that can confuse newcomers:
'options' => [
    'Expires' => gmdate('D, d M Y H:i:s GMT', strtotime('+1 month')),
    'visibility' => 'public', // >>> WRONG <<< -- do not put visibility here
]
If you want to define global options for the HEADERS, the options array is the right way to go. But if you also want to define the visibility, you cannot mix the two: visibility has to be defined outside of the options array.
👍
'visibility' => 'public',
'options' => [
    'Expires' => gmdate('D, d M Y H:i:s GMT', strtotime('+1 month')),
],
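In config/filesystems.php, that separation could look like this (a sketch; the env key names follow the stock Laravel S3 disk and may differ in your project):
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),

    // Visibility is a top-level disk setting...
    'visibility' => 'public',

    // ...while HTTP headers and other S3 options go in the options array.
    'options' => [
        'CacheControl' => 'max-age=315360000, no-transform, public',
        'Expires' => gmdate('D, d M Y H:i:s GMT', strtotime('+1 month')),
    ],
],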
This is an example of how to upload a file to S3 as of Laravel 5.8, with Expires and Cache-Control headers:
Storage::put($directory . '/' . $imageName, $image, [
    'visibility' => 'public',
    'Expires' => gmdate('D, d M Y H:i:s \G\M\T', time() + (60 * 60 * 24 * 7)),
    'CacheControl' => 'max-age=315360000, no-transform, public',
]);
Also, don't forget to uncheck the 'Disable cache' checkbox in Chrome if you're testing and it never seems to work; that tripped me up for an hour when my browser wouldn't cache things even though I finally had the headers right in S3.
For Laravel 9 users this has become easier. You do not need to call ->getDriver() anymore; you can pass options directly to the put method:
Storage::disk('s3')->put('/index.txt', 'file content', [
    // S3 Object ACL
    'visibility' => 'public', // or 'private'

    // HTTP Headers
    'CacheControl' => 'public,max-age=315360000',
    'ContentDisposition' => 'attachment; filename="index.txt"',
    'Expires' => 'Thu, 12 Feb 2032 08:24:43 GMT',

    // Metadata or other S3 options
    'MetadataDirective' => 'REPLACE',
    'Metadata' => [
        'Custom-Key' => 'test',
    ],
]);
In case you need other headers or options, please check the Flysystem source code for all available headers and options:
https://github.com/thephpleague/flysystem-aws-s3-v3/blob/master/src/AwsS3Adapter.php#L38
public const AVAILABLE_OPTIONS = [
    'ACL',
    'CacheControl',
    'ContentDisposition',
    'ContentEncoding',
    'ContentLength',
    'ContentType',
    'Expires',
    'GrantFullControl',
    'GrantRead',
    'GrantReadACP',
    'GrantWriteACP',
    'Metadata',
    'MetadataDirective',
    'RequestPayer',
    'SSECustomerAlgorithm',
    'SSECustomerKey',
    'SSECustomerKeyMD5',
    'SSEKMSKeyId',
    'ServerSideEncryption',
    'StorageClass',
    'Tagging',
    'WebsiteRedirectLocation',
];
I solved this problem; you need to create a custom S3 filesystem.
First, create a new file CustomS3Filesystem.php and save it into app/Providers. This custom S3 filesystem uses the S3 adapter, but lets you add metadata and headers.
<?php namespace App\Providers;

use Storage;
use League\Flysystem\Filesystem;
use Aws\S3\S3Client;
use League\Flysystem\AwsS3v2\AwsS3Adapter as S3Adapter;
use Illuminate\Support\ServiceProvider;

class CustomS3Filesystem extends ServiceProvider {

    public function boot()
    {
        Storage::extend('s3_custom', function($app, $config)
        {
            $s3Config = array_only($config, ['key', 'region', 'secret', 'signature', 'base_url']);

            $flysystemConfig = ['mimetype' => 'text/xml'];
            $metadata['cache_control'] = 'max-age=0, no-cache, no-store, must-revalidate';

            return new Filesystem(
                new S3Adapter(
                    S3Client::factory($s3Config),
                    $config['bucket'],
                    null,
                    ['mimetype' => 'text/xml', 'Metadata' => $metadata]
                ),
                $flysystemConfig
            );
        });
    }

    public function register()
    {
        //
    }
}
Add the provider to the providers list in config/app.php:
'App\Providers\CustomS3Filesystem',
Create a new filesystem name in config/filesystems.php:
's3-new' => [
    'driver' => 's3_custom',
    'key' => 'XXX',
    'secret' => 'XXX',
    'bucket' => 'XXX',
],
Use the newly created custom S3 adapter:
Storage::disk('s3-new')->put($filename, file_get_contents($file), 'public');
I used the Laravel documentation to customize the S3 adapter:
http://laravel.com/docs/5.0/filesystem#custom-filesystems
I hope this helps you.
I am using Laravel 4.2, but I think my solution might also help on Laravel 5.0 (cannot say for sure, as I have not tried to upgrade yet). You need to update the meta options in the config for the Flysystem driver that you are using. In my case, I created a connection called s3static to access the bucket where I am storing images that will not be changing.
My config file:
's3static' => [
    'driver' => 'awss3',
    'key' => 'my-key',
    'secret' => 'my-secret',
    'bucket' => 'my-bucket',
    // 'region' => 'your-region',
    // 'base_url' => 'your-url',
    'options' => array(
        'CacheControl' => 'max_age=2592000'
    ),
    // 'prefix' => 'your-prefix',
    // 'visibility' => 'public',
    // 'eventable' => true,
    // 'cache' => 'foo'
],
Now when I put any files onto S3 using this connection, they have the Cache-Control metadata set.
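Usage is then the same as any other disk; a quick sketch (the path and $localPath are placeholders):
// Objects written through this connection pick up the CacheControl
// value defined in the 's3static' config above.
Storage::disk('s3static')->put('images/logo.png', file_get_contents($localPath));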
To expand on @sergiodebcn's answer, here is the same CustomS3Filesystem class working for S3 v3 and the latest Laravel. Note I have removed the XML mimetype and set up a 5-day cache time:
namespace App\Providers;

use Illuminate\Support\Arr;
use Storage;
use League\Flysystem\Filesystem;
use Aws\S3\S3Client;
use League\Flysystem\AwsS3v3\AwsS3Adapter as S3Adapter;
use Illuminate\Support\ServiceProvider;

class CustomS3Filesystem extends ServiceProvider
{
    /**
     * Format the given S3 configuration with the default options.
     *
     * @param  array  $config
     * @return array
     */
    protected function formatS3Config(array $config)
    {
        $config += ['version' => 'latest'];

        if ($config['key'] && $config['secret']) {
            $config['credentials'] = Arr::only($config, ['key', 'secret']);
        }

        return $config;
    }

    /**
     * Bootstrap a custom filesystem
     *
     * @return void
     */
    public function boot()
    {
        Storage::extend('s3_custom', function($app, $config)
        {
            $s3Config = $this->formatS3Config($config);

            return new Filesystem(
                new S3Adapter(
                    new S3Client($s3Config),
                    $config['bucket'],
                    null,
                    [
                        'CacheControl' => 'max-age=432000'
                    ]
                )
            );
        });
    }

    public function register()
    {
        //
    }
}
Using Laravel 8 here:
I didn't see this mentioned elsewhere, but the metadata option key => values listed by Christoph Kluge appear to only accept string values, and fail silently if passed an integer, bool, etc. So if you're passing in a variable, you'll need to convert it to a string value:
$fileID = $fileData['FileId'];
$fileExt = $fileData['FileExtension'];
$fileUnique = $fileData['UniqueFileId'];
$isImage = $fileData['IsImage'];
$isDefault = $fileData['IsDefaultImage'];
$filePath = $fileUnique . "." . $fileExt;
$file = $mp->fileID($fileID)->get();

if (Storage::disk('s3')->missing('img/' . $filePath)) {
    Storage::disk('s3')->put(
        'img/' . $filePath,
        $file,
        [
            // Metadata or other S3 options
            'MetadataDirective' => 'REPLACE',
            'Metadata' => [
                'is-image' => strval($isImage),
                'is-default' => strval($isDefault),
                'unique-file-id' => strval($fileUnique),
                'file-extension' => strval($fileExt),
            ]
        ]
    );

    echo nl2br('uploading file: ' . $filePath . "\n");
} else {
    echo nl2br('file already exists:' . $filePath . "\n");
}
