I've set up a LiipImagineBundle configuration on a Linux machine (Xubuntu 14.10):
routing.yml
_liip_imagine:
    resource: "@LiipImagineBundle/Resources/config/routing.xml"
config.yml
liip_imagine:
    resolvers:
        default:
            web_path: ~
    filter_sets:
        cache: ~
        dashboard_thumb:
            quality: 75
            filters:
                thumbnail: { size: [60, 60], mode: outbound }
and in my Twig template:
<img src="{{ asset(company.logo.getPath) | imagine_filter('dashboard_thumb') }}">
All source images are under the web/uploads path.
This was working fine: image thumbnails were generated under web/media/cache/dashboard_thumb/uploads/.
My source files are stored on a USB stick, and I launch the server with the server:run command (so under 127.0.0.1:8000).
But recently I launched the server on another computer (Linux Mint 17), and now the image cache is not generated anymore.
When I look at the generated HTML source, the file paths for images are:
http://127.0.0.1:8000/media/cache/resolve/dashboard_thumb/uploads/myimage.png
so I don't know why there is a 'resolve' in the path.
Another thing: if I launch the command
liip:imagine:cache:resolve uploads/myimage.png
the path and image web/media/cache/dashboard_thumb/uploads/myimage.png are created correctly.
Why doesn't this work automatically?
Thanks.
This seems to be a problem with setting up permissions. Basically, the operating system users for the CLI (and for deploys) and for the web server must be in the same group.
Check the docs for Symfony Application Configuration and Setup.
PS: the command you are looking for is chown, but that is only a workaround; I suggest you fix it at the operating-system user level.
Hope this helps.
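As a minimal sketch of one approach from the Symfony setup docs (assuming a standard Symfony 2 layout, and only if you can't fix the users/groups properly), you can relax the umask at the top of both entry points so files created by the CLI stay writable by the web server:

<?php
// Very top of app/console AND web/app_dev.php. Like chown, this is a
// workaround: cache/log files created by either user stay group-writable.
umask(0000);

// ... the rest of the front controller / console script follows unchanged.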
... so I don't know why there is a 'resolve' in the path
If you do not have a cached copy of your image, LiipImagineBundle (imagine_filter in your case) generates a URL according to this route:
liip_imagine_filter:
    path: /media/cache/resolve/{filter}/{path}
    defaults:
        _controller: '%liip_imagine.controller.filter_action%'
    methods:
        - GET
    requirements:
        filter: '[A-z0-9_-]*'
        path: .+
and your request is handled by ImagineController: https://github.com/liip/LiipImagineBundle/blob/1.0/Controller/ImagineController.php
So what you see is not the image path but a route. The controller generates the cache, and your second request for this image will give you the actual path to the image.
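As a rough sketch of what imagine_filter does under the hood (LiipImagineBundle 1.x service names; the path and filter are the ones from the question):

<?php
// Sketch: the same URL the Twig filter prints.
$cacheManager = $container->get('liip_imagine.cache.manager');

// Before the cache exists this returns
// /media/cache/resolve/dashboard_thumb/uploads/myimage.png;
// once resolved it returns /media/cache/dashboard_thumb/uploads/myimage.png.
$url = $cacheManager->getBrowserPath('uploads/myimage.png', 'dashboard_thumb');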
There is one catch: if you need to attach an image to a mail message, you have to resolve the image before attaching it.
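A minimal sketch of resolving ahead of time in PHP, using the same data/filter/cache managers the controller uses (1.x service ids; path and filter name taken from the question):

<?php
// Sketch: generate the cached thumbnail before the first browser request,
// e.g. right before attaching it to a mail message.
$dataManager   = $container->get('liip_imagine.data.manager');
$filterManager = $container->get('liip_imagine.filter.manager');
$cacheManager  = $container->get('liip_imagine.cache.manager');

$path   = 'uploads/myimage.png';
$filter = 'dashboard_thumb';

if (!$cacheManager->isStored($path, $filter)) {
    // Load the source, apply the filter set, and store the result
    // under web/media/cache/dashboard_thumb/...
    $binary = $dataManager->find($filter, $path);
    $cacheManager->store($filterManager->applyFilter($binary, $filter), $path, $filter);
}

// A direct URL now, safe to embed in the mail.
$resolvedUrl = $cacheManager->resolve($path, $filter);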
Also, if the cache is not generated anymore, the problem may be in your web server configuration. Imagine that your Nginx decides that web/media/cache/* is static content; then the route /media/cache/resolve/... simply never reaches your application.
I have built a Symfony 2.6 web site and I deploy it at a URL like this:
https://www.example.com/abc/
This URL points to the "web" directory (as the root directory).
The web site is working fine, but there are two issues:
1) The web debug toolbar is not showing, because the widget points at https://www.example.com/ instead of https://www.example.com/abc/, and I don't understand why.
2) Same thing for the Twig path() function: it also points at https://www.example.com/ instead of https://www.example.com/abc/.
Do you have any idea about that?
I finally got it: "abc" is not a physical folder, it's just a "virtual" path defined in the virtual host of www.example.com. So while https://www.example.com/abc/ points at the web folder of my Symfony project (the root directory by default in the Symfony framework), the folder "abc" doesn't exist in reality.
What I did was modify /public_html/vendor/symfony/symfony/src/Symfony/Component/Routing/Generator/UrlGenerator.php to force the "/abc" part into the dynamically generated URL:
$url = $schemeAuthority."/abc".$this->context->getBaseUrl().$url;
I think Symfony doesn't take this case into account by default.
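For what it's worth, here is a hedged alternative sketch that avoids patching vendor code (which composer will overwrite on the next update): force the "/abc" prefix through the router's RequestContext from a kernel.request listener. The class and its wiring are illustrative, not from the original post:

<?php
use Symfony\Component\HttpKernel\Event\GetResponseEvent;
use Symfony\Component\Routing\RouterInterface;

class BaseUrlListener
{
    private $router;

    public function __construct(RouterInterface $router)
    {
        $this->router = $router;
    }

    public function onKernelRequest(GetResponseEvent $event)
    {
        // Every URL the router generates is now prefixed with /abc.
        $this->router->getContext()->setBaseUrl('/abc');
    }
}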
Another thing that can cause this issue: verify the scheme (http or https) of your website in the routing configuration (routing.yml) of your project, for example:
test:
    resource: "@testBundle/Controller/"
    type: annotation
    prefix: /
    schemes: [http]
I want to be able to use a debugger inside a Docker container. I managed to map the entry point, Laravel's /public/index.php, to its path on the nginx server inside the container; the breakpoint in index.php is hit, but a breakpoint in the default route "/" in app/Http/routes.php is not, although the route's code is executed. It's the Laravel 5.1 default folder structure.
The working path mapping for index.php is /var/www/laravel/public - C:\Users\username\Desktop\zemke2\public (server path and project path, respectively).
I need help making the mapping work for breakpoints in the "/" route.
After some wandering I managed to figure it out. It is quite simple: you map FOLDERS, and you map ABSOLUTE FILESYSTEM PATHS on the server and locally. The browser URL is IRRELEVANT; it does not matter how you run your code.
Map the folder containing the file (or files) you want to debug on the server to the local folder containing that file.
In my case those are:
server path: /var/www/laravel/public - local path: C:\Users\username\Desktop\zemke2\public
server path: /var/www/laravel/app/Http - local path: C:\Users\username\Desktop\zemke2\app\Http
Linux paths are case sensitive. Enabling "break at first line" in the debugger also helps when troubleshooting problems like this.
We were storing all Sonata media files in a local directory, but we have now moved to AWS S3. Since the move, SonataMedia is unable to access the old local files; it is looking for the old files on S3 as well. New files upload to S3 and are accessible.
Please advise how to sync our old data to S3, or how the SonataMedia bundle can look for the old files locally instead of on S3.
Our current SonataMedia configuration is as below:
sonata_media:
    filesystem:
        local:
            directory: %kernel.root_dir%/../web/uploads/media
            create: true
        s3:
            bucket: %sonata_media_s3_bucket%
            accessKey: %sonata_media_s3_accessKey%
            secretKey: %sonata_media_s3_secretKey%
            region: %sonata_media_s3_region%
            create: true
    .....
I was in the same dilemma, and I managed to solve it. SonataMediaBundle has a CLI sync command; basically, it regenerates the routes for the media contexts based on the CDN config. So if you run:
app/console sonata:media:sync
You'll get something like:
Please select the provider
[0] sonata.media.provider.image
[1] sonata.media.provider.file
[2] sonata.media.provider.youtube
[3] sonata.media.provider.dailymotion
[4] sonata.media.provider.vimeo
These providers belong to my project; you may have a similar structure. In my case I just had images, which means just the first one: sonata.media.provider.image. After choosing your option (e.g. 0), you'll be asked to select the context, e.g.:
Please select the context
[0] default
[1] news
[2] collection
[3] category
[4] profile
Just select all the contexts you currently use (of course one by one, step by step).
For each step you'll get something like:
Loaded 52 medias (batch #1, offset 0) for generating thumbs (provider: sonata.media.provider.image, context: default)
Generating thumbs for Scenario - 1
...
...
...
Done (total medias processed: 52).
Once all the processes have finished, if you list the images in your admin dashboard you will see that all of them have the new URL pointing at AWS S3.
As a first step, make sure you do not have the local storage set, so your settings should look like:
sonata_media:
    filesystem:
        s3:
            bucket: %sonata_media_s3_bucket%
            accessKey: %sonata_media_s3_accessKey%
            secretKey: %sonata_media_s3_secretKey%
            region: %sonata_media_s3_region%
            create: true
instead of:
sonata_media:
    filesystem:
        local:
            directory: %kernel.root_dir%/../web/uploads/media
            create: true
        s3:
            bucket: %sonata_media_s3_bucket%
            accessKey: %sonata_media_s3_accessKey%
            secretKey: %sonata_media_s3_secretKey%
            region: %sonata_media_s3_region%
            create: true
There's no need to keep the local storage configured to sync the content with AWS S3, even though you've been storing your images locally.
The sync process just rebuilds the paths for the stored media. It does not currently push the content to AWS S3; that's why you must upload your uploads directory by hand, straight to the root of the bucket where you want to store the media from now on.
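To double-check where a given media item will be served from after the sync, here is a hedged sketch using the image provider's public-URL helper (the service id and the built-in 'reference' format are Sonata defaults; $media stands for an existing media entity):

<?php
// Sketch: print the URL SonataMedia now builds from the cdn config.
$provider = $container->get('sonata.media.provider.image');

// With the S3 cdn settings this should print an
// https://s3.amazonaws.com/... URL for the original file.
echo $provider->generatePublicUrl($media, 'reference');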
The bundle's documentation suggests a CDN path based on S3 static website hosting; if you are not using static hosting, I recommend using the default URL, e.g.:
...
cdn:
    # define the public base url for the uploaded media
    server:
        path: "https://s3.amazonaws.com/%sonata_media.s3.bucket_name%/%sonata_media.cdn.host%"
...
Let's assume you've already finished running the CLI sync command and you've already uploaded your media to AWS S3.
The last step is to re-save each piece of content that contains images or media (e.g. all the posts of your blog which contain images). That means opening them one by one from your admin dashboard and clicking the "update and close" button to refresh the media sources (images / videos / files), because these aren't updated automatically.
I recommend performing all these steps in your development/staging environment before doing it in production.
Once you've successfully performed the previous steps, you can remove the old uploads directory (old local storage).
Done!
We have a Symfony2 application. Everything was fine until we tried to create a subdomain (for a different application). For our first test with the subdomain, we linked the subdomain to a route in the Symfony2 application.
Since that test, the application always returns a 404 for the route used in the test. We reverted all configuration on the server, but the problem persists.
The route is "/usuario/iniciar-sesion".
Our original configuration for the routing is:
#/src/AppBundle/Resources/routing.yml
app_user:
    resource: routing/user.yml
    prefix: /usuario

#/src/AppBundle/Resources/routing/user.yml
app_login:
    path: /iniciar-sesion
    defaults: { _controller: AppBundle:User:login }
We ran the following console commands to check the routing:
php console router:debug
php console router:match /usuario/iniciar-sesion
and everything looks fine.
Everything else works fine. For the moment, the hotfix is to change the prefix (we called it "usuarios"), and the application then runs successfully. Afterwards we tried going back to the original prefix, but the application keeps returning the 404.
We've run cache:clear --env=prod many times and manually deleted the cache dir. In our local environment everything works fine.
What else can we check?
So, as you said in the comment, I think you are trying to achieve this:
#/src/AppBundle/Resources/routing.yml
app_login:
    host: usuario.site.com
    path: /iniciar-sesion
    defaults: { _controller: AppBundle:User:login }
And in the server host configuration you need to add the subdomain as a ServerAlias; you also need to own that subdomain, I think: https://www.godaddy.com/help/add-a-subdomain-that-points-to-a-server-name-19974
We resolved the problem.
We hadn't considered the folders created by the server when the domain was set up. We deleted them and the application now responds on the original path.
I'm using this config.yml:
knp_gaufrette:
    adapters:
        uploaded_files:
            local:
                directory: "%kernel.root_dir%/../web/uploads"
                create: true
    filesystems:
        uploaded_files:
            adapter: uploaded_files
            alias: uploaded_files
Now I want to access the uploaded files from Twig.
So, for example:
<a href="{{ path('gaufrette_download', {system: 'uploaded_files', file: 'test.txt'}) }}">{{ 'Download' | trans }}</a>
The file should have a path like...
http://localhost/web/uploads/test.txt
I want direct access to the file(s), with no controller (action).
Is this possible? Any ideas?
If your folder is web-accessible (i.e. you can type the URL http://localhost/web/uploads/test.txt in your address bar and download the file), all you have to do is map the route gaufrette_download to that path. Your bundle's routing.yml could look like this (notice the missing defaults: { _controller: ... }):
gaufrette_download:
    path: /web/uploads/{file}
If your .htaccess is defined properly, your web server should serve the file directly instead of hitting your application. You might have to add a requirement for file, e.g. to allow slashes (search the Symfony cookbook for this).
If you just want to skip writing a controller (action), you could just as well create an event listener which is triggered when your request matches the route.
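A minimal, untested sketch of such a listener (Symfony 2.x; the route name and the uploaded_files filesystem come from the config above, the rest is illustrative):

<?php
use Gaufrette\Filesystem;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpKernel\Event\GetResponseEvent;

class GaufretteDownloadListener
{
    private $filesystem;

    public function __construct(Filesystem $filesystem)
    {
        $this->filesystem = $filesystem; // the 'uploaded_files' filesystem
    }

    public function onKernelRequest(GetResponseEvent $event)
    {
        $request = $event->getRequest();
        if ('gaufrette_download' !== $request->attributes->get('_route')) {
            return;
        }

        // Setting a response here short-circuits the kernel: no controller runs.
        $event->setResponse(new Response(
            $this->filesystem->read($request->attributes->get('file')),
            200,
            array('Content-Type' => 'application/octet-stream')
        ));
    }
}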