Composer update run via Puppet times out - php

I'm using Composer to manage dependencies, and what I basically want to do is automatically run composer update from the Puppet config when vagrant up runs.
I'm using PuPHPet to generate the Puppet files for Vagrant.
I added a composer::exec section to this code in the default.pp file:
if $php_values['composer'] == 1 {
  class { 'composer':
    target_dir      => '/usr/local/bin',
    composer_file   => 'composer',
    download_method => 'curl',
    logoutput       => true,
    tmp_path        => '/tmp',
    php_package     => "${php::params::module_prefix}cli",
    curl_package    => 'curl',
    suhosin_enabled => false,
  }

  composer::exec { 'composer-update':
    cmd => 'update',
    cwd => '/var/www/myproject',
  }
}
Sometimes I get this error in the output:
Error: Command exceeded timeout
Error: /Stage[main]//Composer::Exec[composer-update]/Exec[composer_update_composer-update]/returns: change from notrun to 0 failed: Command exceeded timeout
And there is no timeout property in the Puppet composer module.
How can I solve this?

Take a look at http://docs.puppetlabs.com/references/latest/type.html#exec-attribute-timeout: it is possible to set a timeout on an exec resource. If the Puppet composer module does not provide an option to override it, it really should, IMO. And if by chance it is Composer itself that is timing out rather than the Puppet exec, you could try
export COMPOSER_PROCESS_TIMEOUT=600
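If the module itself exposes no timeout parameter, one possible workaround (a sketch, not something the module documents) is to override the Exec resource it declares internally using a Puppet resource collector. The exec title below is taken from the error output above; adjust it if your module version names it differently:

# Override the internally declared exec: raise Puppet's timeout and also
# raise Composer's own process timeout via the environment.
Exec <| title == 'composer_update_composer-update' |> {
  timeout     => 600,                              # seconds; 0 disables the timeout entirely
  environment => ['COMPOSER_PROCESS_TIMEOUT=600'], # Composer-side timeout
}

Collector overrides apply to matching resources declared anywhere in the catalog, so keep the title match as specific as possible.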

Related

While uploading my Laravel 6 project to 000webhosting, I got an error like this: [duplicate]

Laravel 5.2: The Process class relies on proc_open, which is not available on your PHP installation

I use a cron job to do some CRUD operations using Laravel Task Scheduling. On localhost and on my shared-hosting server it worked fine for months, but recently I keep getting this error when the cron job runs on the shared-hosting server. I did not make any changes to the code on the server.
[2017-07-14 09:16:02] production.ERROR: exception 'Symfony\Component\Process\Exception\RuntimeException' with message 'The Process class relies on proc_open, which is not available on your PHP installation.' in /home/xxx/xx/vendor/symfony/process/Process.php:144
Stack trace:
On localhost it still works fine. Based on my findings online I have tried the following:
Contacted my hosting company to remove proc_open from the disabled PHP functions.
The hosting company provided a custom php.ini file; I removed all disable_functions entries.
The shared-hosting server was restarted and the cache was cleared.
None of this fixed the issue. I am not sure what to try next, because the same project works fine on a different shared-hosting server.
After many weeks of trying to resolve this error, the following fixes worked:
Upgrade the project from Laravel 5.2 to 5.4.
On cPanel, using "Select PHP Version", set the PHP version to 7.
Or on cPanel, using "MultiPHP Manager", set the PHP version to ea-php70.
Now the cron job runs smoothly. I hope this helps someone.
Laravel 6 and higher (proc_open Error)
It happens because the Flare error reporting service is enabled in debug mode.
There is a workaround for this.
Publish the Flare config file:
php artisan vendor:publish --tag=flare-config
and in config/flare.php set 'collect_git_information' => false:
'reporting' => [
    'anonymize_ips' => true,
    'collect_git_information' => false,
    'report_queries' => true,
    'maximum_number_of_collected_queries' => 200,
    'report_query_bindings' => true,
    'report_view_data' => true,
],
You can use this at your own risk:
/usr/local/bin/php -d "disable_functions=" /home/didappir/public_html/api/artisan schedule:run > /dev/null 2>&1
When the Flare error reporting service is enabled in debug mode, you'll see this error.
The solution is:
Publish the Flare config file:
php artisan vendor:publish --tag=flare-config
and in config/flare.php set:
'reporting' => [
    'anonymize_ips' => true,
    'collect_git_information' => false,
    'report_queries' => true,
    'maximum_number_of_collected_queries' => 200,
    'report_query_bindings' => true,
    'report_view_data' => true,
],
'send_logs_as_events' => false,
For me, removing the cached version of the config.php file solved the problem (Laravel 6):
go to bootstrap/cache/config.php and remove the file.
Also, don't forget to change APP_URL to your domain address, and the PHP version should be the one required by your Laravel version.
On a shared host, if you can't change php.ini, you should use Laravel 5.8.

php5enmod mcrypt with Puppet

Another Puppet-related question.
As part of my installation with Puppet, I'm installing:
Ubuntu 14.04.2 LTS
PHP5-FPM
Nginx
MySQL etc
As part of the PHP class I have the following:
package { [
  'php5-fpm',
  'php5-mysql',
  'php5-cli',
  'php5-mcrypt',
  'php5-curl',
]:
  ensure  => present,
  require => Exec['apt-get update'],
}
This part works fine. No issues.
Once the server has finished doing its thing, I'm able to run:
php5enmod mcrypt
This, again, runs without issue and mcrypt is enabled in the php5-fpm installation. The problem arises with the following code block.
exec { 'enable-mcrypt':
  command => 'php5enmod mcrypt',
  path    => '/usr/sbin',
  require => [
    Package['php5-mcrypt'],
    Package['php5-fpm'],
  ],
  notify  => [
    Service['php5-fpm'],
    Service['nginx'],
  ],
}
I've tried running it in various incarnations, and there are no issues regarding syntax or dependencies for it to execute.
However, when I look through the debug information I'm seeing this:
Debug: Exec[enable-mcrypt](provider=posix): Executing 'php5enmod pdo'
Debug: Executing 'php5enmod pdo'
Notice: /Stage[main]/Php/Exec[enable-mcrypt]/returns: /usr/sbin/php5enmod: 233: /usr/sbin/php5enmod: expr: not found
Notice: /Stage[main]/Php/Exec[enable-mcrypt]/returns: /usr/sbin/php5query: 181: /usr/sbin/php5query: expr: not found
Notice: /Stage[main]/Php/Exec[enable-mcrypt]/returns: /usr/sbin/php5query: 203: /usr/sbin/php5query: find: not found
Notice: /Stage[main]/Php/Exec[enable-mcrypt]/returns: WARNING:
Notice: /Stage[main]/Php/Exec[enable-mcrypt]/returns: usage: php5enmod [ -s ALL|sapi_name ] module_name [ module_name_2 ]
Error: php5enmod pdo returned 1 instead of one of [0]
Error: /Stage[main]/Php/Exec[enable-mcrypt]/returns: change from notrun to 0 failed: php5enmod pdo returned 1 instead of one of [0]
I cannot make heads or tails of it. It would almost appear that php5enmod is not seeing the argument that's being passed to it, hence the WARNING: usage: php5enmod [ -s ALL|sapi_name ] etc.
I say this because if I run php5enmod without any arguments, that's the same error you get.
If anybody has any ideas, I'd be outrageously thankful.
It appears the correct way to do this (as referenced in BMW's comment) is to ensure that Puppet knows where the find command is before attempting to execute php5enmod.
My Puppet configuration is below:
# Ensure mcrypt is enabled
exec { 'enablemcrypt':
  path    => ['/bin/', '/sbin/', '/usr/bin/', '/usr/sbin/'],
  command => 'php5enmod mcrypt',
  notify  => Service['apache2'],
  require => Package['php5-common'],
}
As you can see, by adding "/bin", "/sbin", "/usr/bin" and "/usr/sbin" to the path parameter, php5enmod (and the php5query helper it calls) can now locate the expr and find commands it uses internally, which is exactly what was failing in the debug output above. php5enmod now runs correctly for me on Ubuntu 14.04 LTS.
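One optional refinement, not part of the original answer: as written, the exec runs on every Puppet run. Adding an unless guard skips the command once mcrypt is already enabled. A sketch, assuming the stock Ubuntu 14.04 layout where enabling a module creates a 20-mcrypt.ini symlink under the SAPI's conf.d directory (apache2 here, to match the notify above):

# Same exec as above, with a guard so it only runs while mcrypt is still disabled
exec { 'enablemcrypt':
  path    => ['/bin/', '/sbin/', '/usr/bin/', '/usr/sbin/'],
  command => 'php5enmod mcrypt',
  unless  => 'test -e /etc/php5/apache2/conf.d/20-mcrypt.ini',
  notify  => Service['apache2'],
  require => Package['php5-common'],
}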
Unfortunately, I wasn't able to get this to work as I would have liked. I'm unsure if Puppet is just not playing nicely with php5enmod, or whether there are some internal issues with php5enmod and the way it's being called by the Puppet scripts.
However, I did manage to manually create the symbolic link and restart the service with the following block of code.
file { '/etc/php5/fpm/conf.d/20-mcrypt.ini':
  ensure  => 'link',
  target  => '/etc/php5/mods-available/mcrypt.ini',
  require => [
    Package['php5-mcrypt'],
    Package['php5-fpm'],
  ],
  notify  => Service['php5-fpm'],
}
Hopefully this helps somebody out in the future.

puphpet install wordpress on ubuntu server

I am wondering if anyone has attempted to automate the deployment of WordPress with PuPHPet.
I am not too familiar with PuPHPet, but I know it uses the hiera.yaml file along with the manifests and modules folders. I attempted something simple.
I added this to the config.yaml file and imported the WordPress module from VagrantPress:
wordpress:
    install: '1'
It looks like I may need to add something to the main manifest.pp file that PuPHPet generates. If anyone has attempted something like this, I would appreciate any advice. Or is it better to just use Yeoman instead?
Update
I added this to the config.yaml file:
wordpress:
    install: '1'
Then in the manifest.pp file I added this at the bottom, taken from the WordPress Vagrant box, and it seems to work:
# Begin wordpress
if $wordpress_values == undef {
  $wordpress_values = hiera('wordpress', false)

  if hash_key_equals($wordpress_values, 'install', 1) {
    # Download WordPress
    exec { "download_wordpress":
      command => "wget http://wordpress.org/latest.tar.gz",
      cwd     => "/tmp",
      creates => "/tmp/latest.tar.gz",
      path    => ["/usr/bin", "/bin", "/usr/local/bin"],
      unless  => "test -f /var/www/index.php",
    }

    # Extract WordPress
    exec { "extract_wordpress":
      command => "tar xzf /tmp/latest.tar.gz",
      cwd     => "/tmp",
      creates => "/tmp/wordpress",
      path    => ["/usr/bin", "/usr/local/bin", "/bin"],
      require => Exec["download_wordpress"],
      unless  => "test -f /var/www/index.php",
    }

    # Install WordPress
    exec { "install_wordpress":
      command => "cp -r /tmp/wordpress/* /var/www/wordpress",
      cwd     => "/tmp",
      path    => ["/usr/bin", "/usr/local/bin", "/bin", "/usr/local/sbin", "/usr/sbin", "/sbin"],
      require => Exec["extract_wordpress"],
      unless  => "test -f /home/www/index.php",
    }
  }
}
PuPHPet uses hiera but not in the traditional Puppet way. You still need to actually create the Puppet code that interacts with the hiera values.
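In other words, the pattern used in the Update above is the general one: read the block you added to config.yaml back out of hiera, then guard your resources on it. A minimal sketch of that pattern, assuming PuPHPet's hash_key_equals() helper is available as it is in the generated manifest:

# Read the custom 'wordpress' block from config.yaml via hiera
$wordpress_values = hiera('wordpress', false)

# Only declare your resources when install is set to '1'
if hash_key_equals($wordpress_values, 'install', 1) {
  notify { 'wordpress install requested via config.yaml': }
}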

Error! Symfony2 with KNP Snappy Bundle & WKHTML2PDF

When I go to /fileDownload I receive a 500 Internal Server Error - RuntimeException:
The process stopped because of a "0" signal.
Controller Action:
public function fileAction()
{
    $html = $this->render('MyBundle:Downloads:file.html.twig', array(
        'fileNumber' => '1234'
    ));

    return new Response(
        $this->get('knp_snappy.pdf')->getOutputFromHtml($html),
        200,
        array(
            'Content-Type' => 'application/pdf',
            'Content-Disposition' => 'attachment; filename="file.pdf"'
        )
    );
}
I've run wkhtmltopdf from the terminal and it successfully generated the PDF. It just will not work in the Symfony2 app.
In my config.yml:
knp_snappy:
    pdf:
        enabled: true
        binary: /usr/local/bin/wkhtmltopdf
        options: []
I'm assuming Symfony is issuing an exec() at some stage. You need to get the exact command line error returned. The fact it works for you in a terminal session doesn't necessarily mean it will work when a different user/process is running it.
Check the permissions on wkhtmltopdf so that Apache, or whichever user runs your web server, has permission to run the command.
Also check out the question "wkhtmltopdf: cannot connect to X server" and the first post here: http://geekisland.org/index.php?m=05&y=11&entry=entry110518-114630
An X server is required to run certain builds of wkhtmltopdf, and it is not present when running via cron or from within an Apache process. If this is the case, you need to use the bash wrapper from the first link above.
Be sure that wkhtmltopdf is indeed in the folder you specify, with correct permissions: /usr/local/bin/wkhtmltopdf
I am successfully using the KNP Snappy Bundle with Symfony 2.0; try using renderView() instead of render() when generating the HTML (check my code, which is working):
Controller:
$html = $this->renderView('YOPYourOwnPoetBundle:thePoet:poemPDF.html.twig', array(
    'poem'       => $customizedPoem,
    'fontType'   => $fontType,
    'fontSize'   => $formData['fontSize'],
    'fontWeight' => $fontWeight,
    'fontStyle'  => $fontStyle,
));

return new Response(
    $this->get('knp_snappy.pdf')->getOutputFromHtml($html),
    200,
    array(
        'Content-Type' => 'application/pdf',
        'Content-Disposition' => 'attachment; filename="'.$session->get('poemTitle').'.pdf"',
    )
);
I had this same issue and finally solved it. My advice: if you installed the wkhtmltopdf lib from your OS repos, remove it and download the static version from the Google Code website. Don't use version 11rc; use the version 9 static lib, as it's the one that worked for me.
http://code.google.com/p/wkhtmltopdf/downloads/list
It looks like SELinux is blocking your command. I had the same issue once and solved it by disabling SELinux; now I'm able to generate the PDF file using wkhtmltopdf + PHP.
exec("/usr/local/bin/wkhtmltopdf http://www.google.com /var/www/html/google.pdf");
To disable SELinux, run setenforce 0,
try again and see if it works.
Re-enable it with setenforce 1.
But disabling SELinux is not the best solution for this; instead, extend the policy using audit2allow, which will automatically create a custom policy module that resolves the issue.
First install audit2allow if it is not already installed:
yum -y install policycoreutils-python
grep httpd_t /var/log/audit/audit.log | audit2allow -m httpdlocal > httpd.te
checkmodule -M -m -o httpdlocal.mod httpd.te
semodule_package -o httpdlocal.pp -m httpdlocal.mod
semodule -i httpdlocal.pp
Then try again and see if it works.
I experienced the same issue in the same scenario with wkhtmltopdf, and I got rid of it with the following command:
setsebool httpd_execmem on
