When running drush cim commands in Drupal 8 I get the following error:
Command cim was not found. Drush was unable to query the database. As a result, many commands are unavailable.
Re-run your command with --debug to see relevant log messages.
When I run drush cim --debug, it shows the following:
$ drush cim --debug
[preflight] Redispatch to site-local Drush: C:\xampp\htdocs\executive-coatings\docroot/vendor/drush/drush/drush.
[preflight] Config paths: C:/xampp/htdocs/executive-coatings/docroot/vendor/drush/drush/drush.yml
[preflight] Alias paths: C:/xampp/htdocs/executive-coatings/docroot/drush/sites,C:/xampp/htdocs/executive-coatings/drush/sites,C:/xampp/htdocs/executive-coatings/docroot/drush/sites
[preflight] Commandfile search paths: C:\xampp\htdocs\executive-coatings\docroot\vendor\drush\drush\src
[debug] Bootstrap further to find cim [0.27 sec, 6.78 MB]
[debug] Trying to bootstrap as far as we can [0.27 sec, 6.78 MB]
[bootstrap] Drush bootstrap phase: bootstrapDrupalRoot() [0.27 sec, 6.78 MB]
[bootstrap] Change working directory to C:\xampp\htdocs\executive-coatings\docroot [0.27 sec, 6.78 MB]
[bootstrap] Initialized Drupal 8.6.13 root directory at C:\xampp\htdocs\executive-coatings\docroot [0.28 sec, 6.9 MB]
[bootstrap] Drush bootstrap phase: bootstrapDrupalSite() [0.28 sec, 7.15 MB]
[bootstrap] Initialized Drupal site default at sites/default [0.29 sec, 7.37 MB]
[debug] Could not find a Drupal settings.php file at sites/default/settings.php. [0.29 sec, 7.37 MB]
[bootstrap] Drush bootstrap phase: bootstrapDrupalConfiguration() [0.29 sec, 7.37 MB]
[debug] Add service modifier [0.29 sec, 7.51 MB]
[bootstrap] Unable to connect to database. More information may be available by running `drush status`. This may occur when Drush is trying to bootstrap a site that has not been installed or does not have a configured database. In this case you can select another site with a working database setup by specifying the URI to use with the --uri parameter on the command line. See `drush topic docs-aliases` for details. [0.29 sec, 7.51 MB]
[debug] Bootstrap phase bootstrapDrupalDatabase() failed to validate; continuing at bootstrapDrupalConfiguration() [0.29 sec, 7.51 MB]
[debug] Done with bootstrap max in Application::find(): trying to find cim again. [0.29 sec, 7.51 MB]
In Application.php line 239:
[Symfony\Component\Console\Exception\CommandNotFoundException]
Command cim was not found. Drush was unable to query the database. As a result, many commands are unavailable. Re-r
un your command with --debug to see relevant log messages.
Exception trace:
() at C:\xampp\htdocs\executive-coatings\docroot\vendor\drush\drush\src\Application.php:239
Drush\Application->bootstrapAndFind() at C:\xampp\htdocs\executive-coatings\docroot\vendor\drush\drush\src\Application.php:192
Drush\Application->find() at C:\xampp\htdocs\executive-coatings\docroot\vendor\symfony\console\Application.php:236
Symfony\Component\Console\Application->doRun() at C:\xampp\htdocs\executive-coatings\docroot\vendor\symfony\console\Application.php:148
Symfony\Component\Console\Application->run() at C:\xampp\htdocs\executive-coatings\docroot\vendor\drush\drush\src\Runtime\Runtime.php:118
Drush\Runtime\Runtime->doRun() at C:\xampp\htdocs\executive-coatings\docroot\vendor\drush\drush\src\Runtime\Runtime.php:49
Drush\Runtime\Runtime->run() at C:\xampp\htdocs\executive-coatings\docroot\vendor\drush\drush\drush.php:72
require() at C:\xampp\htdocs\executive-coatings\docroot\vendor\drush\drush\drush:4
You are probably (like me) trying to import all the exported configuration from an existing site (from here on referenced as example.com) into your local environment (localhost/PROJECT_NAME) for development. That is currently (mid-2019) not directly supported, so you will have to import the database first; see the steps below.
Steps:
Export and download the database from your existing site (e.g. example.com/admin/config/development/backup_migrate).
On your local machine, ensure you have enough free RAM (maybe close some programs).
Create your local database (DATABASE_NAME) and import the previously exported one, using phpMyAdmin or a similar tool (e.g. http://localhost/phpmyadmin/).
Ensure the settings.local.php file exists (in the ./sites/default/settings directory); if it does not exist, create and configure it, for example like this:
<?php
$databases['default']['default'] = [
  'database' => 'DATABASE_NAME',
  'username' => 'root',
  'password' => '',
  'prefix' => '',
  // 'collation' => 'utf8mb4_general_ci',
  'host' => '127.0.0.1',
  'port' => '3306',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
];
$settings['drupal_env'] = 'dev';
// Location of the site configuration files:
// (Path used by "drush config-export" command)
# $config_directories['sync'] = '../config/d8_sync';
Ensure the settings.php file exists (in the ./sites/default directory) and that it includes the above-mentioned settings.local.php file (my default settings.php file is attached below).
Go to the localhost/PROJECT_NAME/core/install.php?rewrite=ok&langcode=en&profile=standard&continue=1 link (just to ensure it shows that Drupal is already installed).
Run drush cr (it should now work and clear the cache).
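Before reaching for Drush again, it can help to confirm that the credentials from settings.local.php actually reach the freshly imported database, since the original error is just a failed database bootstrap. A minimal sketch (checkDb is a made-up helper; the host, port, and DATABASE_NAME placeholder mirror the example above):

```php
<?php
// Hypothetical helper: try to connect with the same values used in
// settings.local.php and report what happens.
function checkDb(string $host, string $port, string $db, string $user, string $pass): string
{
    try {
        $pdo = new PDO("mysql:host=$host;port=$port;dbname=$db", $user, $pass);
        $tables = $pdo->query('SHOW TABLES')->fetchAll(PDO::FETCH_COLUMN);
        return 'ok: ' . count($tables) . ' tables';
    } catch (PDOException $e) {
        // Same root cause as the Drush error: no reachable, installed database.
        return 'failed: ' . $e->getMessage();
    }
}

echo checkDb('127.0.0.1', '3306', 'DATABASE_NAME', 'root', ''), "\n";
```

If this prints failed:, Drush will keep refusing cim no matter what else you change, so fix the connection first.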
settings.php file:
This is my default settings.php file, which should include settings.local.php (if it exists).
// Salt for one-time login links, cancel links, form tokens, etc:
$settings['hash_salt'] = 'Dxl656Ddme9FyAvn0y02nrnbETVxVnHZwBLilbjSkQLH0-DHqQd2BZL8yPoM0lRCNKRx7_yqVA';
// Deployment identifier:
# $settings['deployment_identifier'] = \Drupal::VERSION;
// Access control for update.php script:
$settings['update_free_access'] = FALSE;
// External access proxy settings:
# $settings['http_client_config']['proxy']['http'] = 'http://proxy_user:proxy_pass@example.com:8080';
# $settings['http_client_config']['proxy']['https'] = 'http://proxy_user:proxy_pass@example.com:8080';
# $settings['http_client_config']['proxy']['no'] = ['127.0.0.1', 'localhost'];
// Reverse Proxy Configuration:
# $settings['reverse_proxy'] = TRUE;
/**
* Specify every reverse proxy IP address in your environment.
* This setting is required if $settings['reverse_proxy'] is TRUE.
*/
# $settings['reverse_proxy_addresses'] = ['a.b.c.d', ...];
// Reverse proxy trusted headers:
# $settings['reverse_proxy_trusted_headers'] = \Symfony\Component\HttpFoundation\Request::HEADER_X_FORWARDED_ALL | \Symfony\Component\HttpFoundation\Request::HEADER_FORWARDED;
// Page caching:
# $settings['omit_vary_cookie'] = TRUE;
// Cache TTL for client error (4xx) responses:
# $settings['cache_ttl_4xx'] = 3600;
// Expiration of cached forms:
# $settings['form_cache_expiration'] = 21600;
// Class Loader:
# $settings['class_loader_auto_detect'] = FALSE;
// Authorized file system operations:
# $settings['allow_authorize_operations'] = FALSE;
/**
* Default mode for directories and files written by Drupal.
*
* Value should be in PHP Octal Notation, with leading zero.
*/
# $settings['file_chmod_directory'] = 0775;
# $settings['file_chmod_file'] = 0664;
// Public file base URL:
# $settings['file_public_base_url'] = 'http://downloads.example.com/files';
// Public file path:
# $settings['file_public_path'] = 'sites/default/files';
// Private file path:
# $settings['file_private_path'] = '';
// Session write interval:
# $settings['session_write_interval'] = 180;
// String overrides:
# $settings['locale_custom_strings_en'][''] = [
# 'forum' => 'Discussion board',
# '#count min' => '#count minutes',
# ];
// A custom theme for the offline page:
# $settings['maintenance_theme'] = 'bartik';
/**
* If you encounter a situation where users post a large amount of text, and
* the result is stripped out upon viewing but can still be edited, Drupal's
* output filter may not have sufficient memory to process it. If you
* experience this issue, you may wish to uncomment the following two lines
* and increase the limits of these variables. For more information, see
* http://php.net/manual/pcre.configuration.php.
*/
# ini_set('pcre.backtrack_limit', 200000);
# ini_set('pcre.recursion_limit', 200000);
// Active configuration settings:
# $settings['bootstrap_config_storage'] = ['Drupal\Core\Config\BootstrapConfigStorageFactory', 'getFileStorage'];
// Configuration overrides:
# $config['system.file']['path']['temporary'] = '/tmp';
# $config['system.site']['name'] = 'My Drupal site';
# $config['system.theme']['default'] = 'stark';
# $config['user.settings']['anonymous'] = 'Visitor';
// Fast 404 pages:
# $config['system.performance']['fast_404']['exclude_paths'] = '/\/(?:styles)|(?:system\/files)\//';
# $config['system.performance']['fast_404']['paths'] = '/\.(?:txt|png|gif|jpe?g|css|js|ico|swf|flv|cgi|bat|pl|dll|exe|asp)$/i';
# $config['system.performance']['fast_404']['html'] = '<!DOCTYPE html><html><head><title>404 Not Found</title></head><body><h1>Not Found</h1><p>The requested URL "#path" was not found on this server.</p></body></html>';
// Load services definition file:
$settings['container_yamls'][] = $app_root . '/' . $site_path . '/services.yml';
// Override the default service container class:
# $settings['container_base_class'] = '\Drupal\Core\DependencyInjection\Container';
// Override the default yaml parser class:
# $settings['yaml_parser_class'] = NULL;
// The default list of directories that will be ignored by Drupal's file API:
$settings['file_scan_ignore_directories'] = [
'node_modules',
'bower_components',
];
// The default number of entities to update in a batch process:
$settings['entity_update_batch_size'] = 50;
// Entity update backup:
$settings['entity_update_backup'] = TRUE;
/**
* Load local development override configuration, if available.
*
* Use settings.local.php to override variables on secondary (staging,
* development, etc) installations of this site. Typically used to disable
* caching, JavaScript/CSS compression, re-routing of outgoing emails, and
* other things that should not happen on development and testing sites.
*
* Keep this code block at the end of this file to take full effect.
*/
if (file_exists($app_root . '/' . $site_path . '/settings.local.php')) {
  include $app_root . '/' . $site_path . '/settings.local.php';
}
In my case, I simply had not started the virtual server (WAMP).
Please run the drush config-import command before you run the drush cim command.
For more info, please refer to the link here.
I hope it solves your problem.
My wiki:
Product Version
MediaWiki 1.32.0
PHP 7.0.33-0ubuntu0.16.04.5 (apache2handler)
MySQL 5.7.25-0ubuntu0.16.04.2
ICU 55.1
Lua 5.1.5
link: wikijoo.ir
My config.yaml setup:
worker_heartbeat_timeout: 300000
logging:
level: info
#metrics:
# type: log
services:
- module: src/lib/index.js
entrypoint: apiServiceWorker
conf:
# For backwards compatibility, and to continue to support non-static
# configs for the time being, optionally provide a path to a
# localsettings.js file. See localsettings.example.js
#localsettings: ./localsettings.js
# Set your own user-agent string
# Otherwise, defaults to:
# 'Parsoid/<current-version-defined-in-package.json>'
#userAgent: 'My-User-Agent-String'
# Configure Parsoid to point to your MediaWiki instances.
mwApis:
- # This is the only required parameter,
# the URL of your MediaWiki API endpoint.
uri: 'http://wikijoo.ir/api.php/'
# The "domain" is used for communication with Visual Editor
# and RESTBase. It defaults to the hostname portion of
# the `uri` property above, but you can manually set it
# to an arbitrary string. It must match the "domain" set
# in $wgVirtualRestConfig.
domain: 'wikijoo.ir' # optional
# To specify a proxy (or proxy headers) specific to this prefix
And my localsettings.php:
$wgVirtualRestConfig['modules']['parsoid'] = [
// URL to the Parsoid instance - use port 8142 if you use the Debian package - the parameter 'URL' was first used but is now deprecated (string)
'url' => 'http://wikijoo.ir',
// Parsoid "domain" (string, optional) - MediaWiki >= 1.26
'domain' => 'wikijoo.ir',
// Parsoid "prefix" (string, optional) - deprecated since MediaWiki 1.26, use 'domain'
'prefix' => 'wikijoo.ir',
// Forward cookies in the case of private wikis (string or false, optional)
'forwardCookies' => true,
// request timeout in seconds (integer or null, optional)
'timeout' => null,
// Parsoid HTTP proxy (string or null, optional)
'HTTPProxy' => null,
// whether to parse URL as if they were meant for RESTBase (boolean or null, optional)
'restbaseCompat' => null,
];
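A frequent cause of an apierror-visualeditor-docserver-http HTTP 404 is a mismatch between these two files. Two things worth double-checking, both assumptions to verify against your own setup rather than a confirmed fix: the uri in config.yaml is normally written without a trailing slash after api.php, and if Parsoid was installed from the Debian package (as the comment in the PHP snippet above suggests), the 'url' should point at the Parsoid service port 8142, not the bare wiki hostname. For example:

```yaml
mwApis:
  - # No trailing slash after api.php
    uri: 'http://wikijoo.ir/api.php'
    domain: 'wikijoo.ir'
```

and, correspondingly, in LocalSettings.php: 'url' => 'http://localhost:8142' (or 'http://wikijoo.ir:8142' if Parsoid runs on another host).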
And Parsoid is active.
sudo systemctl status parsoid:
● parsoid.service - LSB: Web service converting HTML+RDFa to MediaWiki wikitext and back
Loaded: loaded (/etc/init.d/parsoid; bad; vendor preset: enabled)
Active: active (running) since Mon 2019-08-05 10:44:26 UTC; 1 day 20h ago
Docs: man:systemd-sysv-generator(8)
Process: 3861 ExecStop=/etc/init.d/parsoid stop (code=exited, status=0/SUCCESS)
Process: 3875 ExecStart=/etc/init.d/parsoid start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/parsoid.service
├─3884 /bin/sh -c /usr/bin/nodejs /usr/lib/parsoid/src/bin/server.js -c /etc/mediawiki/parsoid/config.yaml >> /var/log/parsoid/parsoid.log 2>&1
└─3887 /usr/bin/nodejs /usr/lib/parsoid/src/bin/server.js -c /etc/mediawiki/parsoid/config.yaml
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
When I want to edit an article, I get the message below:
Error loading data from server: apierror-visualeditor-docserver-http: HTTP 404. Would you like to try again?
Please help me.
Best regards.
I think I'm missing something easy:
I am testing a Mailgun install on an EC2 Linux instance.
The following code works when I use a putty session:
php /var/www/html/[thefilebelow.php]
But it fails when I go to a browser and use
http://myexample.com/[thefilebelow.php]
That gives a 500 error.
[thefilebelow.php]:
<?php
# Include the Autoloader (see "Libraries" for install instructions)
require '/home/ec2-user/vendor/autoload.php';
use Mailgun\Mailgun;
# Instantiate the client.
$mgClient = new Mailgun('kxxxxxxxxxx');
$domain = "mg.myexample.com";
# Make the call to the client.
$result = $mgClient->sendMessage($domain, array(
'from' => 'bob <info@lxxxxx.com>',
'to' => 'Steve <xxxxx@gmail.com>',
'subject' => 'Hello',
'text' => 'Testing some Mailgun awesomness!'
));
ERROR LOG:
PHP Fatal error: require(): Failed opening required '/home/ec2-user/vendor/autoload.php' (include_path='.:/usr/share/pear7:/usr/share/php7') in /var/www/html/myfilebelow.php on line 3
Just to be clear - the location of the require file is correct.
Permissions for /var/www/html/myfilebelow.php ec2-user:www
Permissions for /home/ec2-user/vendor/ ec2-user:www
(permissions are the same for include file and script)
So the answer is that when installing Composer on an EC2 instance (and generally on other servers), be sure to navigate out of the landing directory when opening an ec2-user session in PuTTY, if the landing directory is your user directory.
Move up to /var/www/ or some other directory owned by the www group.
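The CLI-versus-browser difference above can be narrowed down with a tiny probe run through the web server. This is only a diagnostic sketch (diagnose_require is a made-up helper), separating a wrong path from a file the Apache user is not allowed to read, which is the usual culprit when the file lives under /home/ec2-user:

```php
<?php
// Hypothetical diagnostic: report why a require would fail for the
// user PHP is currently running as (the CLI user vs. the Apache user).
function diagnose_require(string $path): string
{
    if (!file_exists($path)) {
        // Note: this also fires when a parent directory denies traversal.
        return "missing: $path";
    }
    if (!is_readable($path)) {
        return "unreadable: $path"; // exists, but this user cannot read it
    }
    return "ok: $path";
}

echo diagnose_require('/home/ec2-user/vendor/autoload.php'), "\n";
```

Run it once from the shell and once via the browser; if the shell says ok: and the browser says missing: or unreadable:, it is a home-directory permissions problem, which matches the advice above to move the vendor tree under /var/www/.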
In settings.php you have to put in code similar to the settings.php example below.
I get an error in the nginx log file showing that it cannot find the path for DrupalMongoDBCache.
FastCGI sent in stderr: "PHP message: PHP Fatal error: Class 'DrupalMongoDBCache' not found in /var/www/drupal-test/includes/cache.inc
If I change the following paths to be absolute, then I get a different error:
$conf['cache_backends'][] = '/var/www/drupal-test/sites/all/modules/mongodb/mongodb_cache/mongodb_cache.inc';
$conf['session_inc'] = '/var/www/drupal-test/sites/all/modules/mongodb/mongodb_session/mongodb_session.inc';
FastCGI sent in stderr: "PHP message: PHP Fatal error: require_once(): Failed opening required '
/var/www/drupal-test/
drupal-test/sites/all/modules/mongodb/mongodb_cache/mongodb_cache.inc'
Notice it prepends the same beginning path twice. Why? I need PHP to resolve the correct directory so I can avoid this 500 internal server error.
Please help =)
Copied code into settings.php:
#MongoDB
$conf['mongodb_connections'] = array(
'default' => array( // Connection name/alias
'host' => 'localhost', // Omit USER:PASS# if Mongo isn't configured to use authentication.
'db' => 'tomsadvice-mongodb' // Database name. Make something up, mongodb will automatically create the database.
),
);
include_once('.includes/cache.inc');
# -- Configure Cache
$conf['cache_backends'][] = 'sites/all/modules/mongodb/mongodb_cache/mongodb_cache.inc';
$conf['cache_class_cache'] = 'DrupalMongoDBCache';
$conf['cache_class_cache_bootstrap'] = 'DrupalMongoDBCache';
$conf['cache_default_class'] = 'DrupalMongoDBCache';
# -- Don't touch SQL if in Cache
$conf['page_cache_without_database'] = TRUE;
$conf['page_cache_invoke_hooks'] = FALSE;
# Session Caching
$conf['session_inc'] = 'sites/all/modules/mongodb/mongodb_session/mongodb_session.inc';
$conf['cache_session'] = 'DrupalMongoDBCache';
# Field Storage
$conf['field_storage_default'] = 'mongodb_field_storage';
# Message Queue
$conf['queue_default_class'] = 'MongoDBQueue';
?>
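The doubled path in the error log is consistent with how Drupal 7 loads these entries: during bootstrap, core requires each $conf['cache_backends'] item as DRUPAL_ROOT . '/' . $entry, so the paths must be relative to the Drupal root, and an absolute path gets the root glued on a second time. A rough sketch of the mechanism (resolve_backend is a stand-in for illustration, not actual core code):

```php
<?php
// Stand-in for what Drupal 7's bootstrap effectively does with each
// $conf['cache_backends'] entry: require_once DRUPAL_ROOT . '/' . $entry.
const ROOT = '/var/www/drupal-test';

function resolve_backend(string $entry): string
{
    return ROOT . '/' . $entry;
}

// Relative entry (what the module expects): resolves cleanly.
echo resolve_backend('sites/all/modules/mongodb/mongodb_cache/mongodb_cache.inc'), "\n";

// Absolute entry (the "exact path" attempt): the root is prepended again,
// which reproduces the doubled path from the FastCGI error.
echo resolve_backend('/var/www/drupal-test/sites/all/modules/mongodb/mongodb_cache/mongodb_cache.inc'), "\n";
```

So keeping the relative paths from the module's README is the right direction; making them absolute can only make things worse. Separately, the include_once('.includes/cache.inc') line above looks like it may be a typo for './includes/cache.inc', which is worth checking against the original "class not found" error.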
Which version are you using, the RC2 or the dev version?
If you are using the dev version, just try the RC2 version.
I am implementing the Mmoreram GearmanBundle in my Symfony (2.4) project.
I have a website where user actions trigger jobs, like:
# Get Gearman and tell it to run in the background a 'job'
$id = $this->params['gearman']->doHighBackgroundJob('MYBundleServicesPublishWorker~publish',
json_encode($parameters)
);
And I have one worker that runs infinitely and does the jobs (iterations: 0).
I run it once from the command line, in the background:
nohup php /myproject/app/console gearman:worker:execute MYBundleServicesPublishWorker > /tmp/output_log.txt 2> /tmp/error_log.txt &
The config looks like:
gearman:
# Bundles will parsed searching workers
bundles:
# Name of bundle
MyBundle:
# Bundle name
name: myBundle
# Bundle search can be enabled or disabled
active: true
# If any include is defined, Only these namespaces will be parsed
# Otherwise, full Bundle will be parsed
include:
- Services
- EventListener
# Namespaces this Bundle will ignore when parsing
ignore:
- DependencyInjection
- Resources
# default values
# All these values will be used if are not overwritten in Workers or jobs
defaults:
# Default method related with all jobs
# do // deprecated as of pecl/gearman 1.0.0. Use doNormal
# doNormal
# doBackground
# doHigh
# doHighBackground
# doLow
# doLowBackground
method: doNormal
# Default number of executions before job dies.
# If annotations defined, will be overwritten
# If empty, 0 is defined by default
iterations: 0
# execute callbacks after operations using Kernel events
callbacks: true
# Prefix in all jobs
# If empty name will not be modified
# Useful for rename jobs in different environments
job_prefix: null
# Autogenerate unique key in jobs/tasks if not set
# This key is unique given a Job name and a payload serialized
generate_unique_key: true
# Prepend namespace when callableName is built
# By default this variable is set as true
workers_name_prepend_namespace: true
# Server list where workers and clients will connect to
# Each server must contain host and port
# If annotations defined, will be full overwritten
#
# If servers empty, simple localhost server is defined by default
# If port empty, 4730 is defined by default
servers:
localhost:
host: 127.0.0.1
port: 4730
doctrine_cache:
providers:
gearman_cache:
type: apc
namespace: doctrine_cache.ns.gearman
My problem is that when I run app/console cache:clear and a job comes in afterwards, the worker crashes.
It throws this error:
PHP Warning:
require_once(/myproject/app/cache/dev/jms_diextra/doctrine/EntityManager_53a06fbf221b4.php):
failed to open stream: No such file or directory in
/myproject/app/cache/dev/appDevDebugProjectContainer.php on line 787
PHP Fatal error: require_once(): Failed opening required
'/myproject/app/cache/dev/jms_diextra/doctrine/EntityManager_53a06fbf221b4.php'
(include_path='.:/usr/share/php:/usr/share/pear') in
/myproject/app/cache/dev/appDevDebugProjectContainer.php on line 787
How can I fix it? I tried changing the Doctrine bundle cache type to file_system/array/apc,
but it did not help.
How can I overcome this?
What am I doing wrong?
Thanks in advance.
I found the problem. I had this line in my worker:
$this->doctrine->resetEntityManager();
That was causing it. Now I just open the connection and close it, like:
$em = $this->doctrine->getEntityManager();
$em->getConnection()->connect();
# run publish command
............
# close connection
$em->getConnection()->close();
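The fix above, opening a fresh connection per job and always closing it, can be sketched independently of Doctrine. Connection here is a stand-in class, not the real Doctrine DBAL connection; the point is the try/finally shape, so the connection is released even when a job throws:

```php
<?php
// Stand-in for a DBAL-like connection; the real worker would use
// $this->doctrine->getEntityManager()->getConnection() instead.
class Connection
{
    public bool $open = false;
    public function connect(): void { $this->open = true; }
    public function close(): void { $this->open = false; }
}

function runJob(Connection $conn, callable $publish): void
{
    $conn->connect();      // open per job, so a reset/stale manager can't bite
    try {
        $publish();        // the actual publish work
    } finally {
        $conn->close();    // always release, even if the job throws
    }
}

$conn = new Connection();
runJob($conn, function () { /* publish */ });
assert($conn->open === false);
```

This avoids resetEntityManager() entirely while still guaranteeing that a long-lived worker never leaks an open connection across jobs.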
I'm trying to use Capifony with my web app in Symfony 2.1 to accelerate the deployment process.
Here is my deploy.rb file:
default_run_options[:pty] = true
set :application, "mywebsite"
set :domain, "mywebsite.com"
set :deploy_to, "~/git/mywebsite.git"
set :app_path, "app"
set :repository, "git@github.com:myname/mywebsite.git"
set :scm, :git
# Or: `accurev`, `bzr`, `cvs`, `darcs`, `subversion`, `mercurial`, `perforce`, or `none`
set :user, "myserveruser" # The server's user for deploys
set :model_manager, "doctrine"
# Or: `propel`
role :web, domain # Your HTTP server, Apache/etc
role :app, domain # This may be the same as your `Web` server
role :db, domain, :primary => true # This is where Symfony2 migrations will run
set :use_composer, true
set :update_vendors, true
set :use_sudo, false
set :keep_releases, 3
set :shared_files, ["app/config/parameters.yml"]
set :shared_children, [app_path + "/logs", web_path + "/uploads"]
set :deploy_via, :rsync_with_remote_cache
set :ssh_options, { :forward_agent => true }
ssh_options[:keys] = %w(/.ssh/id_rsa)
ssh_options[:port] = xxxx
# Be more verbose by uncommenting the following line
logger.level = Logger::MAX_LEVEL
And here is my error :
The Process class relies on proc_open, which is not available on your PHP installation.
when the script runs php composer.phar update
more details here : http://pastebin.com/hNJaMvwf
But I'm on shared hosting, and my host told me that I can't have proc_open enabled. Is there a way to get it working anyway?
Thanks a lot for your help !
Composer needs to be able to run command-line processes (it does this using the symfony/process component). There is no way to have Composer run if your host does not support proc_open.
As an alternative deployment strategy, you could upload the vendor/ directory manually to the production machine (you can use the upload functionality in your Capistrano recipe). That said, virtual servers are affordable these days, and I would not recommend deploying Symfony2 applications to a shared hosting anyway. Maybe you should be looking for a different hosting solution?
I also encountered a similar (but different) problem with my web host when using Composer to install the Semantic MediaWiki extension for my MediaWiki installation. I was not using Capifony, but was using PuTTY and SSH to run Composer on a "remote" command line. Composer failed with the same error:
The Process class relies on proc_open, which is not available on your PHP installation.
However, I was able to fix it another way.
proc_open is a PHP function that is typically disabled by most web hosts. It is disabled by including the function in the list of disabled functions, which is set with the PHP configuration setting disable_functions. In other words, if it is included in the list it is disabled; if it is removed from the list it is enabled.
You can therefore effectively enable proc_open "on the fly" by using the PHP command-line -d option to clear the list of disabled functions (which includes proc_open). In other words, by emptying disable_functions you effectively enable all functions, including proc_open.
To use -d to enable proc_open, set the disable_functions setting to an empty string. This removes the whole list of disabled functions (including proc_open).
When installing at the command line using an SSH client such as PuTTY, use a command similar to this:
php -f composer.phar -d detect_unicode=Off -d disable_functions= require mediawiki/semantic-media-wiki "1.9.*,>=1.9.0.1"
So, if you can figure out a way to pass these -d settings from your Ruby file, you may be able to solve your problem.
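One way to do that from deploy.rb, if your Capifony version supports the :composer_bin setting (this is an assumption to verify against your version, not something tested on shared hosting), is to override the Composer command with the same -d flags:

```ruby
# Assumption: :composer_bin is honored by your Capifony version; it
# replaces the plain "php composer.phar" invocation with one that
# clears disable_functions for this one process only.
set :composer_bin, "php -d detect_unicode=Off -d disable_functions= composer.phar"
```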
I know this does not fully address your problem, but it may help others overcome annoying default PHP settings on shared servers that get in the way of Composer!
I hope this helps someone.