I'm attempting to run an application on the server, invoking it from PHP using the following code.
$application = "D:\\Program Files (x86)\\ScanBoy\\DM ScanBoy.exe";
exec($application);
Right now the application does 'run', but it crashes instantly. If I run the application manually (by double-clicking the .exe), it runs and everything is fine.
When the application crashes the only error I get is
"{application name} has stopped working. Windows is checking for a
solution to the problem"
I have had this problem before when running an application via a C# backend to an ASP.NET page. The solution there was to set the working directory, but I don't know how to set that option with PHP's exec.
Any help please?
You can either:
Use exec("cd myworkdir/ && D:\\Program Files (x86)\\ScanBoy\\DM ScanBoy.exe"); to change the working directory for that exec command (only)
Use the php chdir() function to change the working directory of the php process.
You can find chdir documentation here:
http://php.net/manual/en/function.chdir.php
You can use chdir() to change the current working directory.
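For example, a minimal sketch combining chdir() with exec() (assuming the application needs to run from its own install folder):
<?php
// Minimal sketch: run the .exe from its install directory (assumed requirement).
$dir = 'D:\Program Files (x86)\ScanBoy';
$exe = 'D:\Program Files (x86)\ScanBoy\DM ScanBoy.exe';

chdir($dir);                                   // working directory of the PHP process
exec(escapeshellarg($exe), $output, $status);  // escapeshellarg() quotes the spaces in the path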
Related
I'm using XAMPP on Windows 10, and trying to run a simple PHP script that uses the shell_exec function to call a simple R script in the same directory. Here's the PHP index.php file:
<?php
echo shell_exec('Rscript test.R');
And here's the test.R file:
print("Hello, World!")
From the command line (i.e., Git for Windows), when I run php index.php in the folder, I get the following output:
[1] "Hello, World!"
However, when I run index.php from the web browser via XAMPP and localhost, I don't see anything on the page. At first, I thought it might be a permissions issue, so I gave Full control access to all users on the folder, but it didn't seem to change anything.
Any ideas on how to get this working from the browser? Thank you.
Found the answer to the problem I was having. Apparently, even though Rscript is in my Windows 10 PATH variable, when I execute an R script via shell_exec in PHP I have to give the full path to the Rscript executable (presumably because the PHP script / Apache server doesn't see my user's PATH variable).
I changed the shell_exec function call as follows, and it worked:
echo shell_exec('"C:\Program Files\R\R-4.1.0\bin\Rscript" test.R');
Here's a link to the SO post that helped me figure this out:
shell_exec does not execute from browser requests
Is it possible to include global $_lib, $_SETUP; in crontab?
I have a cron job written in a PHP file in an internet directory (/internet/mycrontab.php), but it seems the cron job throws an error when I use $_lib, as in $_lib['db']->db_fetch_object($query).
The $_lib works fine if I open the URL directly in the browser (www.myweb.dom/internet/mycrontab.php), and the cron job also works fine if I replace the $_lib calls such as $_lib['db']->db_fetch_object($query) with hardcoded syntax.
If it is possible to include global $_lib, $_SETUP;, how do I do that correctly?
Many thanks for the help.
The problem is that the crontab environment and your web app environment are different things.
The cron job is run by the PHP CLI, while the app is run by the Apache (or NGINX) PHP module.
You should probably include your library in the crontab script and initialize the globals yourself:
include "/path/to/your/library.php";
$_lib = "whatever";
$_SETUP = "whatever";
Without having a proper look at the code that's the best I can suggest.
I am trying to run a php script on a remote server using ansible.
Running the script as the ansible user (which Ansible uses to log in to the server) works perfectly. The Ansible task, however, fails when there are include statements in my PHP script.
My PHP script lives in /srv/project
and it tries to include includes/someLibrary.php
Everything works fine when running the script as any user with the correct access rights, but when running it via an Ansible task
- name: run script
  shell: 'php /srv/project/script.php'
it fails with: failed to open stream: No such file or directory in /srv/project/includes/someLibrary.php
Running a very basic php script works nicely though.
I just found the solution to the problem.
The problem was that when I executed the script by hand, I connected to the server and cd'd into the /srv/project directory before calling php script.php. PHP's include in this case looks in the current directory for the files I want to include. When Ansible connects to the server, it does not change the directory, thus producing the 'No such file or directory' error. The solution is simple, as the shell module takes chdir as an argument to change to the specified directory before running the command.
My ansible task now looks as follows:
- name: run script
  shell: 'php /srv/project/script.php'
  args:
    chdir: '/srv/project'
Thanks everyone for your help!
Ansible runs under a non-interactive ssh session, and thus does not apply user environment settings (eg, .bashrc, .bash_profile). This is typically the cause of different behavior when running interactively vs. not. Check the difference between an interactive printenv and raw: printenv via Ansible, and you'll probably find what needs to be set (via an ansible task/play environment: block) to get things working.
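For the environment half of the problem, a sketch of that approach (the PATH value is a placeholder; copy whatever your interactive printenv reports):
- name: run script
  shell: 'php /srv/project/script.php'
  args:
    chdir: '/srv/project'
  environment:
    PATH: '/usr/local/bin:/usr/bin:/bin'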
I'm trying to run a ROS shell program on the server through PHP on Ubuntu 14.04. I have tried using system, exec, and shell_exec, but nothing happens and I don't get any output. The call is the following:
echo shell_exec("/opt/ros/indigo/bin/rosrun gazebo_ros spawn_model -database Part_A -gazebo -model Part_A");
What are the limitations of using system or exec to run any shell command through php on a server?
I don't care as much about the command's output as about its execution. I think the problem has to do with the fact that PHP doesn't have a PATH like the shell does, so it can't find any applications without the exact location being specified. How can I make PHP see the same PATH the shell does?
The problem was that the environment in which the shell commands run under the Apache user was not set up correctly. I followed the instructions from this answer, but instead of using "source" I used "." and instead of the source.bash file I used the source.sh file. I also set all the environment variables that had to do with ROS or Gazebo using the putenv() function.
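A rough sketch of the putenv() part (the values below are assumptions; copy the real ones from printenv in a working ROS shell):
<?php
// Sketch: give Apache's PHP process the environment a ROS shell would have.
putenv('PATH=/opt/ros/indigo/bin:/usr/local/bin:/usr/bin:/bin');
putenv('ROS_PACKAGE_PATH=/opt/ros/indigo/share:/opt/ros/indigo/stacks');  // assumed value
putenv('ROS_MASTER_URI=http://localhost:11311');                          // assumed value

echo shell_exec('rosrun gazebo_ros spawn_model -database Part_A -gazebo -model Part_A 2>&1');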
I have a Ruby script that's being used to do some API calls/screen scraping, but our main app is in PHP. Our PHP app is using shell_exec() to call the Ruby script.
The Ruby script works great when called from the command line, but it randomly exits early when called via PHP's shell_exec.
Here's an example of the Ruby script:
#!/usr/bin/env ruby
require 'rubygems'
require 'mysql'
require 'net/http'
require 'open-uri'
require 'uri'
require 'cgi'
require 'fileutils'
# Bunch of code here ... works fine
somePath = 'http://foo.com/bar.php'
# Seems to always exit when I do a Net::HTTP or open-uri call
post = Net::HTTP.post_form(URI.parse(somePath),{'id'=>ID,'q'=>'some query'})
data = post.body
# OR
data = open(somePath).read
# More code here ...
So, all I can deduce so far is that it's always exiting when I try to grab/read an external URL via net/http or open-uri calls. The pages I'm grabbing can accept POST or GET requests, but it seems to be exiting either way.
I'm outputting the results with PHP after the shell_exec call, but there are no error messages or exit notices. I do have messages being output by my Ruby script with "puts ..." here and there. Could that be a problem? (I'm thinking 'no', because it doesn't exit at the earlier puts messages.)
Again, it works fine when called from the shell. It's almost like the shell_exec call isn't waiting for the net/http call to finish.
Any ideas?
I'm not sure about this, but given your explanation, which sounds plausible, have you looked at proc_open?
http://us3.php.net/proc_open
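A minimal sketch of that idea (the script path is an assumption; adjust it to your setup):
<?php
// Sketch: run the Ruby script via proc_open so PHP explicitly waits for it
// and captures both stdout and stderr.
$descriptors = [
    1 => ['pipe', 'w'],  // child's stdout
    2 => ['pipe', 'w'],  // child's stderr
];
$process = proc_open('ruby /path/to/script.rb', $descriptors, $pipes);

if (is_resource($process)) {
    $stdout = stream_get_contents($pipes[1]);
    $stderr = stream_get_contents($pipes[2]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    $exitCode = proc_close($process);  // blocks until the child exits

    echo $stdout;
    if ($exitCode !== 0) {
        error_log("ruby script exited with code $exitCode: $stderr");
    }
}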
Ruby's open-uri requires tempfile, so I'm guessing there's a file ownership conflict between you running your ruby script and the web server running it. Can the web server create a temp file using tempfile?
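One quick way to test that hypothesis (a sketch; the inline Ruby one-liner is just illustrative):
<?php
// Sketch: from the web server's context, ask Ruby to create a temp file.
// If this prints an error instead of a path, permissions are the likely culprit.
echo shell_exec('ruby -e \'require "tempfile"; puts Tempfile.new("probe").path\' 2>&1');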
Just an FYI, I never really uncovered why this was happening. The best I could deduce was that some type of permission issue was preventing Ruby's open-uri commands from working properly.
I opted for queuing these jobs in a db table and running my ruby script via cron periodically. Everything seems to work fine when the ruby script runs with root/sudo perms.
Run this in a Linux terminal:
sudo -H -u <user> bash -c '<your code>' where <user> is the Apache user.
To find the Apache user you can put echo shell_exec("whoami"); inside your code and run it in the browser. whoami works on Linux and Windows, but if you're on Windows, the default Apache user is your own user. You can test it anyway in case it's different, but I can't tell you how to run the code on Windows as if Apache were running it.
After that you'll have a clue about what's happening. In most cases the problem is that Apache's working folder is different from the operating system's. So when you run a command with a relative path, the OS resolves it from / while Apache resolves it from /var/www/html on Linux, /opt/lampp/htdocs on XAMPP (Linux), and C:/xampp/htdocs on XAMPP (Windows). You get the idea, I think.
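For example, if the Apache user turns out to be www-data (a common default on Debian/Ubuntu), you could reproduce the web server's view of the Ruby script with:
sudo -H -u www-data bash -c 'ruby /path/to/script.rb'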