Crontab on AWS EC2 and $_SERVER PHP variables

I have some PHP scripts for database maintenance on my server that need to run periodically. Obviously the easiest solution is to schedule them with the system cron.
The scripts read some server variables from $_SERVER, like the database hostname, cron parameters, etc.
I can run the scheduled commands from the command line without any problem, and everything seems to work fine (calling something like php filename.php). However, when the same commands are executed from cron, the scripts fail with an error like the following:
PHP Notice: Undefined index: RDS_DATABASE in /var/app/current/app/xx/Db/ConnectionFactory.php on line 8
It seems that $_SERVER is not fully initialized when running from cron, even though it works from the command line. I have tried crontab -u ec2-user -e but without luck.
I do not want to use wget to run the scripts, as it adds overhead, and the scripts are deliberately not reachable over HTTP.
Any hint on why $_SERVER is accessible from the command line but fails under crontab?
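For reference, the failing access is essentially this (simplified; not the exact code from ConnectionFactory.php):
$database = $_SERVER['RDS_DATABASE']; // works from my shell, undefined index under cron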

Had the same issue. Found a solution:
echo "InstanceID: ".get_cfg_var('INSTANCE_ID')."\n";
For some reason it was working fine as my ec2-user but not as a root cron job. Using this function instead of accessing the $_SERVER array solved my problem.

$_SERVER is only fully populated when PHP runs under a web server. If you use crontab and execute PHP from the command line, most of those entries will not be set. See the PHP documentation: http://php.net/manual/en/reserved.variables.server.php#refsect1-reserved.variables.server-indices
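A quick way to see what the CLI actually puts into $_SERVER (a one-liner sketch):
$ php -r 'var_export(array_keys($_SERVER));'
This typically shows argv, argc, PATH, HOME and other environment entries, but no HTTP_* or SERVER_NAME.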

As @Baminc and @Ankur say, the solution is to use the get_cfg_var function to get the information, because those $_SERVER entries are only available when the script is accessed through a web server.
What I do is the following, for example with SERVER_NAME:
if (isset($_SERVER['SERVER_NAME'])) {
    // Web request: the SAPI fills $_SERVER
    $myServerName = $_SERVER['SERVER_NAME'];
} else {
    // CLI/cron: fall back to the value from the PHP configuration
    $myServerName = get_cfg_var('SERVER_NAME');
}
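One more option worth noting: under the CLI SAPI, $_SERVER typically also contains the process environment, so you can define the missing values directly in the crontab (cron accepts simple VAR=value lines above the job entries). A sketch with hypothetical names and paths:
# crontab -u ec2-user -e
RDS_DATABASE=mydb
0 * * * * php /var/app/current/cron/maintenance.php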
Hope this helps!

Related

cPanel Cron Job PHP Global Variable

The cron job process works, but it doesn't read global variables like $_SERVER in PHP.
Cron Job Code:
/usr/local/bin/ea-php72 -q /home/userName/public_html/folderName/folderName2/phpFile.php
PHP Code:
print_r($_SERVER['DOCUMENT_ROOT']);
How do we get it to read these global variables?
For DOCUMENT_ROOT this is normal. You are running PHP on the command line, so no web server is involved and there is no document root.
PHP therefore can't give you this information; the other web-related entries of $_SERVER are likewise not provided when PHP runs on the command line.
There is no server, so those $_SERVER entries are not set.
You are running the script directly from cron (as opposed to having a cron job trigger an HTTP request to a web server), so of course they are missing.
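One workaround is to supply the value yourself, either in the crontab or as a fallback in the script. A sketch; the hard-coded path is illustrative:
<?php
// phpFile.php: fall back when there is no web server to set DOCUMENT_ROOT
$docRoot = $_SERVER['DOCUMENT_ROOT']
    ?? (getenv('DOCUMENT_ROOT') ?: '/home/userName/public_html');
print_r($docRoot);
?>
With this in place you can also put DOCUMENT_ROOT=/home/userName/public_html on its own line in the crontab.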

Why am I getting an "Undefined index: HTTP_HOST" error?

I am using the Facebook SDK to post some test wall posts on my own Facebook page. It works fine when I run the script in my browser, but when I run it from the terminal it gives me the error below, and I don't know what's wrong. I want to post to my Facebook page with PHP cron scripts, e.g. every 6 hours.
Undefined index: HTTP_HOST error in Facebook/src/base_facebook.php
Cron does not execute PHP as an Apache module, so many environment variables are not set by a server. When executed from cron, your PHP script runs like a CGI one; more precisely, it is the CLI (command line interface, php-cli). So, as you can imagine, there is no web server and there is no HTTP_HOST.
PS: You can pass data (URLs, hostname, or whatever you like) to PHP as command line arguments or environment variables: see the PHP manual's "Command line usage" page.
Addition:
$ HTTP_HOST=www.mysite.com php -f cronjob.php  # example
<?php
// cronjob.php
// The CLI does not populate $_GET; read the value from the environment instead
$host = getenv('HTTP_HOST');
?>
If you run your script from a terminal or a cron job, there is no HTTP environment.
A possible solution is to have cron fetch the script over HTTP with wget http://.../parameters instead of running php scriptname.
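If a library insists on reading $_SERVER['HTTP_HOST'], as base_facebook.php does here, a pragmatic workaround is to set the entry yourself before loading it. A sketch; the hostname is a placeholder:
<?php
// cron entry point: fake the entries the SDK expects when there is no web server
if (PHP_SAPI === 'cli' && !isset($_SERVER['HTTP_HOST'])) {
    $_SERVER['HTTP_HOST'] = 'www.example.com'; // placeholder hostname
}
require_once 'Facebook/src/base_facebook.php';
?>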

PHP shell_exec() issue

I am having an issue using the PHP function shell_exec().
I have an application which I can run from the Linux command line perfectly fine. The application takes several hours to run, so I am trying to spawn a new instance using shell_exec() to manage things better. However, when I run the exact same command (which works on the command line) through shell_exec(), it returns an empty string, and it doesn't look like any new process was started. Plus it completes almost instantly. shell_exec() is supposed to wait until the command has finished, correct?
I have also tried variations of exec() with the same outcome.
Does anyone have any idea what could be going on here?
There are no symbolic links or anything funky in the command: just the path to the application and a few command line parameters.
Something may be off with your environment.
Compare the output of env from the CLI (command line interface) and from within the PHP script.
Also check which shell interpreter is being used.
And do the script and the CLI application run as the same user?
Also check the safe_mode option.
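For example, a quick sketch to see which user and environment the PHP process actually has:
<?php
// Compare this output with `whoami` and `env` from your interactive shell
echo shell_exec('whoami');
echo shell_exec('env');
?>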
Make sure the user Apache runs as (probably www-data) has access to the files and that they are executable (check with ls -la). A simple chmod 777 [filename] would fix that.
By default PHP will time out after 30 seconds. You can disable the limit like this:
<?php
set_time_limit(0);
?>
Edit:
Also consider this: http://www.rabbitmq.com/
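(Side note on the several-hours runtime: shell_exec() does wait for the command to finish, so a common pattern for long jobs is to redirect output and background the process, which makes the call return immediately. A sketch with an illustrative path:)
<?php
// Detach the long-running job; without the redirection PHP would block on it
shell_exec('/path/to/application arg1 arg2 > /tmp/app.log 2>&1 &');
?>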

Can't run shell script from php web script

I am trying to run a shell script from a php script.
I have complete control of the environment (unix on mac), I should have all the permissions, etc. set correctly.
The web script is in /htdocs/
The shell script can be executed from anywhere so when I go to /htdocs/ in the shell, I can easily run it like this:
$ my_shellscript
.. but when my php script (which is located in htdocs) tries to call it:
shell_exec('my_shellscript');
I get nothing.
I have proven the script can be called from that location and I have temporarily granted full access to try to get it working somehow. I am going crazy, please help.
If you know of some other way of triggering a shell script via the web that would be fine.
Thanks in advance.
Well, I had the same problem a few weeks ago. The solution is to check whether Apache has permission to execute your script. You could also try running the script with the PHP CLI.
Since it is a shell script, it needs to be invoked with a path prefix. My guess is you need to do this:
shell_exec('./my_shellscript');
First thing: make sure PHP isn't running in Safe Mode.
Next thing: Try running it with the exec() function and using the full path (e.g. /var/www/htdocs/my_shellscript)
Try doing
echo shell_exec('my_shellscript 2>&1');
which will capture the script's stderr output and print it out. If something inside the script is failing, this output would otherwise be lost when not being run interactively.
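To also see the exit status, exec() can be used the same way. A small diagnostic sketch (the path is illustrative):
<?php
// Run by absolute path, merge stderr into stdout, and report the exit code
exec('/var/www/htdocs/my_shellscript 2>&1', $output, $exitCode);
echo "exit code: $exitCode\n" . implode("\n", $output) . "\n";
?>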

Unexpected behavior when calling a Ruby script via PHP's shell_exec()

I have a Ruby script that's being used to do some API calls/screen scraping, but our main app is in PHP. Our PHP app uses shell_exec() to call the Ruby script.
The Ruby script works great when called from the command line, but it randomly exits early when called via PHP's shell_exec().
Here's an example of the Ruby script:
#!/usr/bin/env ruby
require 'rubygems'
require 'mysql'
require 'net/http'
require 'open-uri'
require 'uri'
require 'cgi'
require 'fileutils'
# Bunch of code here ... works fine
somePath = 'http://foo.com/bar.php'
# Seems to always exit when I do a Net::HTTP or open-uri call
post = Net::HTTP.post_form(URI.parse(somePath),{'id'=>ID,'q'=>'some query'})
data = post.body
# OR
data = open(somePath).read
# More code here ...
So, all I can deduce so far is that it always exits when I try to grab/read an external URL via net/http or open-uri calls. The pages I'm grabbing accept both POST and GET requests, but it seems to exit early either way.
I'm outputting the results with PHP after the shell_exec call, but there are no error messages or exits reported. I do have messages output by my Ruby script with "puts ..." here and there. Could that be a problem? (I'm thinking no, because it doesn't exit at earlier puts messages.)
Again, it works fine when called from the shell. It's almost like the shell_exec call isn't waiting for the net/http call to finish.
Any ideas?
I'm not sure about this, but given your explanation, which sounds plausible, have you looked at proc_open:
http://us3.php.net/proc_open
Ruby's open-uri requires tempfile, so I'm guessing there's a file ownership or permission conflict between you running your Ruby script and the web server running it. Can the web server create a temp file using tempfile?
Just an FYI, I never really uncovered why this was happening. The best I could deduce was that some type of permission issue was preventing Ruby's open-uri commands from working properly.
I opted for queuing these jobs in a db table and running my ruby script via cron periodically. Everything seems to work fine when the ruby script runs with root/sudo perms.
Run in a Linux terminal:
sudo -H -u <user> bash -c '<your code>' where <user> is Apache's user.
To find Apache's user you can run echo shell_exec("whoami"); inside your code and open it in the browser. whoami works on Linux and Windows, but under Windows the Apache default user is your own user. You can test it anyway in case it's different, but I can't tell how to run the code on Windows as if Apache were running it.
After that you should have a clue of what's happening. In most cases the problem is that Apache's root folder differs from the operating system's: when you run a command with an absolute path, the OS resolves it from / while Apache works from /var/www/html on Linux, /opt/lampp/htdocs with XAMPP (Linux), and C:/xampp/htdocs with XAMPP (Windows). You get the idea, I think.
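For example, to reproduce what the web script runs (assuming the Apache user is www-data and the script lives in the web root):
$ sudo -H -u www-data bash -c 'cd /var/www/html && ./my_shellscript'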
