In a Symfony 4.3 application with Elasticsearch 6.8 plus friendsofsymfony/elastica-bundle v5.1.0,
an index creation task takes 18 minutes to complete, either with or without enqueue/enqueue-bundle 0.9.12 and enqueue/fs 0.9.12. Is there a package I'm missing (although enqueue says it's a complete solution) or a configuration error?
fos_elastica.yaml:
fos_elastica:
serializer: ~
clients:
default: { host: localhost, port: 9200 }
indexes:
house_date:
types:
house_date:
serializer:
groups: [house_date]
persistence:
# the driver can be orm, mongodb or phpcr
driver: orm
model: App\Entity\Contact
provider: ~
finder: ~
enqueue.yaml:
enqueue:
default:
transport: '%env(resolve:ENQUEUE_DSN)%'
client: ~
enqueue_elastica:
    transport: '%enqueue.default_transport%'
Edit:
After much exploration I've inched along, but without ultimate success. I added enqueue/elastica-bundle, and enqueue.yaml has been edited to appear as above.
[An identical installation in Windows reaches a 256M memory limit at about 54% completion, again regardless of the presence of the enqueue components.]
It is likely true that the seemingly long time to populate the index was the result of an improper definition. The definition incorporated four entities (via relationships). By changing the indexed model from Contact to Household, which has a one-to-many relationship with Contact, the time to populate the index was reduced by a factor of 10. As a result I'm abandoning this question and marking it Answered.
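For reference, a minimal sketch of the relationship as I understand it (everything beyond the Household/Contact names is illustrative, not copied from the real entities):

<?php
// src/Entity/Household.php - hypothetical sketch of the one-to-many side
namespace App\Entity;

use Doctrine\Common\Collections\ArrayCollection;
use Doctrine\ORM\Mapping as ORM;

/**
 * @ORM\Entity
 */
class Household
{
    /**
     * @ORM\Id
     * @ORM\GeneratedValue
     * @ORM\Column(type="integer")
     */
    private $id;

    /**
     * One Household has many Contacts; indexing Household instead of Contact
     * means far fewer root documents and a shallower relationship chain to walk.
     *
     * @ORM\OneToMany(targetEntity="App\Entity\Contact", mappedBy="household")
     */
    private $contacts;

    public function __construct()
    {
        $this->contacts = new ArrayCollection();
    }
}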
Related
In a Symfony 4.3 app with friendsofsymfony/elastica-bundle 5.1.0, enqueue/elastica-bundle 0.9.3, and enqueue/fs 0.9.12, installation of the latter ends with:
The child node "transport" at path "enqueue_elastica" must be configured.
I've tried multiple permutations in enqueue.yaml, including the only example I've found.
enqueue.yaml:
enqueue:
default:
transport: '%env(resolve:ENQUEUE_DSN)%'
client: ~
.env includes:
###> enqueue/enqueue-bundle ###
ENQUEUE_DSN=null://
###< enqueue/enqueue-bundle ###
Elasticsearch 6.8 is installed. Relatively simple indexes are readily created. A more complex index fails by running out of memory - thus the need for enqueue.
What is an appropriate configuration of enqueue.yaml for filesystem transport?
Edit: Curiously, the Ubuntu 18 Hyper-V virtual machine was able to slog through the population without enqueue, while the Windows host failed at 94200/156865.
The seemingly correct config has:
.env:
...
###> enqueue/enqueue-bundle ###
ENQUEUE_DSN="file://%VAR_DIR%/enqueue"
###< enqueue/enqueue-bundle ###
This needed to be accompanied by
enqueue.yaml:
enqueue:
default:
transport:
dsn: '%env(resolve:ENQUEUE_DSN)%'
path: '%kernel.project_dir%/var/queue' ## probably just a placeholder
client: ~
While the above avoids any errors being thrown, it does not allow the population to complete. I'm officially stuck. Time to look at reducing index complexity and multi-index searches.
You should add
enqueue_elastica:
transport: '%enqueue.default_transport%'
doctrine: ~
in your enqueue.yaml config.
I am busy implementing Elasticsearch for a customer project. Elasticsearch seems pretty powerful and has a neat API.
Integrated with FOSElasticaBundle, I have the following configuration:
fos_elastica:
clients:
default:
host: localhost
port: 9200
indexes:
app:
client: default
settings:
index:
analysis:
filter:
synonym:
type: synonym
synonyms: ["katze, katz"]
analyzer:
synonyms:
tokenizer: standard
filter: [ lowercase, synonym ]
types:
article:
mappings:
name: { analyzer: synonyms }
shortDescription: ~
description: ~
categories:
type: "nested"
properties:
name: ~
persistence:
driver: orm
model: ShopBundle\Entity\Catalog\Article
provider: ~
finder: ~
listener: ~
When I search for "Katze" (German for cat), I get the expected results (articles having the string "Katze" in their name). When I search for "Katz", I get 0 results.
This is a simpler configuration than the one I really use. I tried it with the nGram tokenizer and other custom analyzers... I can't get rid of the feeling that Elasticsearch isn't indexing the field with analyzed content.
When I use the "_analyze" API endpoint and send a string together with the analyzer name (one of my custom analyzers), the result (a list of tokens) looks right!
Am I missing something?
The FOSElasticaBundle version is 4.0.1 and the Elasticsearch server is 5.6.3.
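For reference, the _analyze check can also be reproduced through the Elastica client that FOSElasticaBundle registers; this is only a sketch, assuming the fos_elastica.client.default service and the "app" index / "synonyms" analyzer from the config above:

$client = $this->container->get('fos_elastica.client.default');

// Run the "synonyms" analyzer of the "app" index on a sample string (ES 5.x _analyze body).
$response = $client->request('app/_analyze', \Elastica\Request::POST, [
    'analyzer' => 'synonyms',
    'text'     => 'Katz',
]);

// The token list should contain "katze" if the synonym filter is applied.
print_r($response->getData()['tokens']);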
I have a reporting application built with Symfony 2.8.14/Doctrine. One of my reports takes about 2 minutes to run and executes a series of queries (https://dba.stackexchange.com/questions/157981/reporting-query-blocks-other-query-but-isolation-level-read-uncommitted-set/159495#159495).
I have found that the locking appears to be happening at the Symfony level, because the same pages can be loaded with no problem if I switch to app_dev.php or run the MySQL queries on the command line while the report is running.
Is there a connection limit or other locking I could have turned on accidentally?
My Doctrine configuration
# Doctrine Configuration
doctrine:
dbal:
default_connection: default
connections:
default:
driver: pdo_mysql
host: "%database_host%"
port: "%database_port%"
dbname: "%database_name%"
user: "%database_user%"
password: "%database_password%"
charset: UTF8
options:
1001: true
orm:
auto_generate_proxy_classes: "%kernel.debug%"
default_entity_manager: default
entity_managers:
default:
connection: default
naming_strategy: doctrine.orm.naming_strategy.underscore
mappings:
AppBundle: ~
FOSUserBundle: ~
errorlog:
connection: default
naming_strategy: doctrine.orm.naming_strategy.underscore
mappings:
AppBundle: ~
Apache2 configuration - mpm-itk
<IfModule mpm_itk_module>
StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxRequestWorkers 150
MaxConnectionsPerChild 0
</IfModule>
Not using PHP-FPM as stated in comments, but using mod_php:
<FilesMatch \.php$>
SetHandler application/x-httpd-php
</FilesMatch>
I'm running MySQL 5.7.11 on an m4.2xlarge, which according to this: http://pushentertainment.com/rds-connections-by-instance-type/ allows for 2500+ connections.
Almost every time I've encountered locking that wasn't in the database, it has been PHP session serialization.
With the default Symfony NativeFileSessionHandler, PHP will wait to obtain a file lock before opening the session file; the lock won't be released until the session is closed (i.e. the request is finished). This helps avoid race conditions between processes reading/writing session data.
If this is the cause, then opening two tabs in the same browser (print $session->getId() to confirm the session ID is shared) will block, but trying with different browsers (different session IDs) will not block. Be aware that, depending on the state of ignore_user_abort(), previous requests cancelled in the browser but still processing will also block any new requests.
As to why this behaves differently between dev and prod, differences in session handler settings between your config files will do this.
Alternatively, if that's not the cause, I'd use strace -p PID and /proc/PID to determine which system call the Apache/PHP process is blocked on (it can be annoying to work out which is the blocked process, but you only have 5 Apache processes and a 2-minute window to find the right one).
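If session locking does turn out to be the cause, a common mitigation is to write and close the session before the long-running work starts, so the lock is released early. A minimal sketch, assuming a standard Symfony 2.8 controller (the report-builder service and template names are made up):

<?php
namespace AppBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Request;

class ReportController extends Controller
{
    public function longReportAction(Request $request)
    {
        // Write and close the session now instead of at the end of the
        // ~2 minute request, releasing the file lock for this session ID.
        $request->getSession()->save();

        $data = $this->get('app.report_builder')->buildReport(); // hypothetical service

        return $this->render('report/show.html.twig', array('data' => $data));
    }
}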
It's not caused by Symfony, it's caused by the web server. You can refer to my post about requests in Laravel, another PHP MVC framework.
To solve this, run the time-consuming process in the background.
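As a sketch of that suggestion, assuming the heavy report has been moved into a console command (app:build-report is a hypothetical name) and the Symfony Process component is available:

// "nohup ... &" detaches the command and the output redirect lets the shell
// return immediately, so run() does not block for the full two minutes.
$process = new \Symfony\Component\Process\Process(
    'nohup php app/console app:build-report > /dev/null 2>&1 &'
);
$process->run();

// The HTTP response can be sent right away; the report keeps running in the background.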
I am trying to set up FOSElasticaBundle.
composer.json entry:
"friendsofsymfony/elastica-bundle": "^3.2"
I was following the official docs tutorial, but this line doesn't work:
$finder = $this->container->get('fos_elastica.finder.app.user');
The error I get:
You have requested a non-existent service "fos_elastica.index.app.user".
Do you have any idea why?
Things I checked:
AppKernel.php contains 'new FOS\ElasticaBundle\FOSElasticaBundle(),'
The Profiler, under the 'Configuration' tab, does not show 'FOSElastica' as enabled :(
config.yml:
fos_elastica:
clients:
default: { host: localhost, port: 9200 }
indexes:
app:
types:
user:
mappings:
email: ~
persistence:
# the driver can be orm, mongodb, phpcr or propel
# listener and finder are not supported by
# propel and should be removed
driver: orm
model: AppBundle\Entity\User
provider: ~
listener: ~
finder: ~
Thanks in advance for any insights, guys :)
Turns out the bundle didn't install properly. Adding
"minimum-stability": "dev",
to composer.json on a fresh installation worked.
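Once the bundle installs correctly, the finder from the docs can be used roughly like this (the search term is just an example):

$finder = $this->container->get('fos_elastica.finder.app.user');

// Returns an array of AppBundle\Entity\User entities hydrated from the matching documents.
$users = $finder->find('john@example.com');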
I started using Redis in my project (php-redis). It is a Symfony2 project and I found this:
https://github.com/snc/SncRedisBundle
I followed the installation process and configured:
Some clients to store NoSQL data and cache
Session storage
Doctrine metadata, result and query cache
I created a new entity in a bundle and failed because I created it as YAML while all the others use the annotation system, so I deleted the YAML format and created the annotations.
Every change I make to the annotated class (changing the table name, for example) does not affect the schema or the database, even if I recreate the database or try to execute cache:clear with all its options.
If I just comment out the Redis Doctrine configuration lines, it works and I can see the changes in the schema.
Maybe I'm forgetting something, or I can't really find out how to clear that Doctrine Redis cache.
Do I have to manually clear any entries on the Redis client used for caching?
Here is the configuration:
#Snc Redis Bundle
snc_redis:
clients:
d2a:
type: phpredis
alias: d2a
dsn: redis://localhost/1
cache:
type: phpredis
alias: cache
dsn: redis://localhost
logging: true
session:
client: d2a
prefix: redis_session
doctrine:
metadata_cache:
client: cache
entity_manager: default # the name of your entity_manager connection
document_manager: default # the name of your document_manager connection
result_cache:
client: cache
entity_manager: [default, read] # you may specify multiple entity_managers
query_cache:
client: cache
entity_manager: default
The easiest way, though not the best one, is to flush the Redis database that holds the Doctrine cache. Run
php app/console redis:flushdb --client=cache
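A narrower alternative (a sketch, assuming the configured metadata cache is a Doctrine\Common\Cache\CacheProvider, which the SncRedisBundle Redis caches extend) is to clear only Doctrine's metadata entries instead of flushing the whole database:

$em = $this->getDoctrine()->getManager();                    // default entity manager
$metadataCache = $em->getConfiguration()->getMetadataCacheImpl();

if ($metadataCache instanceof \Doctrine\Common\Cache\CacheProvider) {
    $metadataCache->deleteAll();                             // drop only the cached class metadata
}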
(Not tested!) Another way is to set up the Doctrine metadata cache in the Doctrine config: http://symfony.com/doc/current/reference/configuration/doctrine.html#caching-drivers
orm:
entity_managers:
# A collection of different named entity managers (e.g. some_em, another_em)
some_em:
metadata_cache_driver:
type: array # Required
host: ~
port: ~
instance_class: ~
class: ~