Ubuntu Eclipse LAMPP stack PHP internal server error

I am running Ubuntu 12.04 with the standard LAMPP stack, with the localhost document root at /var/www. There is a symbolic link ('git') to my Eclipse workspace 'user/Documents/workspace'. I have set permissions on the workspace folders using 'sudo chmod 644 .', and in the httpd.conf in /etc/apache2/sites-available I have changed the user/group section to read:
# User/Group: The name (or #number) of the user/group to run httpd as.
# It is usually good practice to create a dedicated user and group for
# running httpd, as with most system services.
#
#User daemon
User cjmartin
Group daemon
If I run an html file that does a callback to a php file on my remote server it works, but if I use the same php file in my eclipse workspace I get:
"Failed to load resource: the server responded with a status of 500 (Internal Server Error) http://l*calhost/git/CrystalliseCalculators/CrystalliseCalculators.php?q={%…nd+Wales&searchTerms%5Bmodel_name%5D=Crystallise+model+1.0&_=138651239125"
Plain HTML files run normally from the workspace, but not a PHP call.
I assume this is a permissions problem. Any ideas?
The callback code:
...
var path1 = "http://localhost/git/CrystalliseCalculators/";
...
function doIt() {
    var strSearch = "The search terms";
    var theCalla = path1 + "CrystalliseCalculators.php?q=" + JSON.stringify(strSearch);
    // Call the Crystallise API to fetch central mortalities.
    $.ajax({
        url: theCalla,
        type: 'GET',
        dataType: "jsonp",
        jsonp: "callback",
        data: strSearch,
        success: function (dataBack) {
            // Do stuff with the results....
        },
        error: function (errorData1) {
            alert("error msg" + JSON.stringify(errorData1));
        }
    });
}
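For reference, a JSONP endpoint like the one being called has to wrap its JSON output in the callback parameter that jQuery appends to the URL. A minimal sketch of what such a PHP endpoint might look like (the payload is a placeholder, not the actual calculator code):

<?php
// Minimal sketch of a JSONP responder (hypothetical payload, not the real
// calculator code). jQuery replaces the "callback" parameter with a generated
// function name and expects the response wrapped in a call to it.
header('Content-Type: application/javascript');

$callback = isset($_GET['callback']) ? $_GET['callback'] : 'callback';
// Guard against script injection via the callback name.
if (!preg_match('/^[A-Za-z0-9_.]+$/', $callback)) {
    $callback = 'callback';
}

$q = isset($_GET['q']) ? $_GET['q'] : '';
$result = array('query' => $q); // placeholder for the real result set

echo $callback . '(' . json_encode($result) . ');';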

As it turned out, this was not a permissions problem with the relevant files and folders.
The problem was a database call inside the PHP file that failed silently.
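If you hit the same symptom, a quick way to surface this kind of silent failure is to enable error reporting at the top of the PHP file and check the database call explicitly. A minimal sketch, assuming mysqli (the credentials and query are placeholders):

<?php
// Show errors while debugging; disable again in production.
error_reporting(E_ALL);
ini_set('display_errors', '1');

// Hypothetical connection details; substitute your own.
$db = new mysqli('localhost', 'dbuser', 'dbpass', 'crystallise');
if ($db->connect_error) {
    header('HTTP/1.1 500 Internal Server Error');
    die('Connect failed: ' . $db->connect_error);
}

$result = $db->query('SELECT 1'); // placeholder query
if ($result === false) {
    // Without an explicit check, a failed query can abort the script
    // with nothing in the browser but a 500 Internal Server Error.
    header('HTTP/1.1 500 Internal Server Error');
    die('Query failed: ' . $db->error);
}

Apache's error log (/var/log/apache2/error.log on Ubuntu) usually records the underlying PHP error as well.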

Related

How to solve the error "QXcbConnection: Could not connect to display" when using the exec function for PhantomJS in PHP

I am working on a project where I want to use PHP and PhantomJS together. I have completed my PhantomJS script and am trying to run it using PHP's exec function, but the function returns an array of errors.
Below is my PhantomJS and PHP code.
dir: /var/www/html/phantom/index.js
var page = require('webpage').create();
var fs = require('fs');

page.open('http://insttaorder.com/', function(status) {
    // Get all links to CSS and JS on the page
    var links = page.evaluate(function() {
        var urls = [];
        $("[rel=stylesheet]").each(function(i, css) {
            urls.push(css.href);
        });
        $("script").each(function(i, js) {
            if (js.src) {
                urls.push(js.src);
            }
        });
        return urls;
    });
    // Save all links to a file
    var url_file = "list.txt";
    fs.write(url_file, links.join("\n"), 'w');
    // Launch wget to download all files from list.txt to the current folder
    require("child_process").execFile("wget", [ "-i", url_file ], null,
        function(err, stdout, stderr) {
            console.log("execFileSTDOUT:", stdout);
            console.log("execFileSTDERR:", stderr);
            // After wget finishes, exit PhantomJS
            phantom.exit();
        });
});
dir: /var/www/html/phantom/index.php
<?php
exec('/usr/bin/phantomjs index.js 2>&1', $output);
echo '<pre>';
print_r($output);
die;
I also tried:
exec('/usr/bin/phantomjs /var/www/html/phantom/index.js 2>&1',$output);
echo '<pre>';
print_r($output);
die;
After running this I am getting the error below:
Array
(
    [0] => QXcbConnection: Could not connect to display
    [1] => PhantomJS has crashed. Please read the bug reporting guide at
    [2] => and file a bug report.
    [3] => Aborted (core dumped)
)
But if I run the index.php file from the terminal like this:
user2@user2-H81M-S:/var/www/html/phantom$ php index.php
then it works fine. I don't know how to solve it. Please help.
I am using the following versions:
System: Ubuntu 16.04.2 LTS
PHP: 5.6
PhantomJS: 2.1.1
Did you try to set an environment variable on your server, or add it before calling phantomjs?
I was in the same situation and found some solutions:
a. Define or set the variable QT_QPA_PLATFORM to offscreen:
QT_QPA_PLATFORM=offscreen /usr/bin/phantomjs index.js
b. Or add this line to your .bashrc file (put it at the end):
export QT_QPA_PLATFORM=offscreen
c. Or install the package xvfb and call xvfb-run before phantomjs:
xvfb-run /usr/bin/phantomjs index.js
d. Or use the -platform parameter:
/usr/bin/phantomjs -platform offscreen index.js
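Applied to the exec call from the question, option (a) might look like this (a sketch; putenv is an alternative way to pass the variable to the child process):

<?php
// Option 1: prefix the command with the variable.
exec('QT_QPA_PLATFORM=offscreen /usr/bin/phantomjs /var/www/html/phantom/index.js 2>&1', $output);

// Option 2: set it in the PHP process environment; exec'd children inherit it.
putenv('QT_QPA_PLATFORM=offscreen');
exec('/usr/bin/phantomjs /var/www/html/phantom/index.js 2>&1', $output);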
Maybe you don't want to, or can't, make modifications on your server; in that case you may try downloading the static binary from the official website and running:
/path/to/the/bin/folder/phantomjs index.js
and/or creating an alias in your .bash_aliases file like this:
alias phantomjs=/path/to/the/bin/folder/phantomjs
Make sure that phantomjs is not already installed on the system if you decide to use the alias.
If the .bash_aliases file does not exist, feel free to create it, or add the alias line at the end of the .bashrc file instead.
Some references:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=817277
https://github.com/ariya/phantomjs/issues/14376
https://bugs.launchpad.net/ubuntu/+source/phantomjs/+bug/1586134
I had the same problem running phantomjs on headless Ubuntu 18.04 (on the default Vagrant VM install of openstreetmap-website). Following Jiab77's links, it seems the PhantomJS team says the problem is in the Debian package, but the Debian team closed the bug as wontfix. I needed phantomjs to "just work" so it can be called by other programs that expect it to work normally. Specifically, openstreetmap-website has an extensive Ruby test suite with over 40 tests that were failing because of this, and I didn't want to modify all those tests.
Following Jiab77's answer, here's how I made it work:
As root, cp /usr/bin/phantomjs /usr/local/bin/phantomjs
Edit /usr/local/bin/phantomjs and add the line export QT_QPA_PLATFORM=offscreen so it runs before execution. Here is what mine says after doing so:
#!/bin/sh
LD_LIBRARY_PATH="/usr/lib/phantomjs:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH
# 2018-11-13: added the next line so phantomjs can run headless as explained on
# https://stackoverflow.com/questions/49154209/how-to-solve-error-qxcbconnection-could-not-connect-to-display-when-using-exec
export QT_QPA_PLATFORM=offscreen
exec "/usr/lib/phantomjs/phantomjs" "$#"
After this change, phantomjs can be run from the command line without changing anything else, and all the tests that depend on phantomjs passed.

How to tell Deployer to use a different PHP version once SSH'ed into my shared hosting?

I'm experimenting with Deployer to deploy a Laravel application to shared hosting (using the laravel recipe) from my local ~/Code/project_foo.
When I'm connected to my shared hosting server via SSH, the default php -v version is 5.6.33. I confirmed that I can change the PHP version on the fly by calling php70 -v, or even the whole path like /usr/local/bin/php70.
The problem is that I don't know how to tell Deployer to call commands using php70, which is required; otherwise composer install fails.
So in the terminal, from the root of the Laravel project, I simply call:
dep deploy
My deploy.php is messy and very simple, but this is just a proof of concept. I'm trying to figure everything out first and then I will make it look nicer.
I checked the source code of the laravel recipe, and I saw that there is:
{{bin/php}}
but I don't know how to override the value to match what my hosting tells me to use:
/usr/local/bin/php70
Please give me any hints on how to force the script to use a different PHP version once connected to the remote host/server.
This is the whole script:
<?php
namespace Deployer;

require 'recipe/laravel.php';

//env('bin/php', '/usr/local/bin/php70'); // <- I thought that this will work but it doesn't change anything

// Project name
set('application', 'my_project');

// Project repository
set('repository', 'git@github.com:xxx/xxx.git');

// [Optional] Allocate tty for git clone. Default value is false.
set('git_tty', true);

// Shared files/dirs between deploys
add('shared_files', []);
add('shared_dirs', []);

// Writable dirs by web server
add('writable_dirs', []);

// Hosts
host('xxx')
    ->user('xxx')
    ->set('deploy_path', '/home/slickpl/projects/xxx');

// Tasks
task('build', function () {
    run('cd {{release_path}} && build');
});

// [Optional] if deploy fails automatically unlock.
after('deploy:failed', 'deploy:unlock');

// Migrate database before symlink new release.
before('deploy:symlink', 'artisan:migrate');
OK, I've found the solution.
I added (after require):
set('bin/php', function () {
    return '/usr/local/bin/php70';
});
For anybody searching for how to change Composer's PHP version:
set('bin/composer', function () {
    return '/usr/bin/php7.4 /usr/local/bin/composer';
});
There is a function locateBinaryPath(), so the result is:
set('bin/php', function () {
    return locateBinaryPath('php7.4');
});
First find the php path and the composer path using:
find / -type f -name "php" 2>&1 | grep -v "Permission denied"
find / -type f -name "composer" 2>&1 | grep -v "Permission denied"
then:
set('bin/composer', function () {
    return 'php_path composer_path';
});
like this:
set('bin/composer', function () {
    return '/opt/remi/php73/root/usr/bin/php /usr/bin/composer';
});
For more information see Setting PHP versions in Deployer deployments.

Deployer - no tty present and no askpass program specified

I am having trouble deploying with Deployer 4.0.2, and I need the help of somebody more experienced than me.
I want to deploy a repository of mine to a Ubuntu 16.04 server.
I am using Laravel Homestead as a development environment, where I also installed Deployer. From there I SSH into my remote server.
I was able to deploy my code with the root user, until I hit a RuntimeException that aborted my deployment.
Do not run Composer as root/super user! See https://getcomposer.org/root for details
That made me create another user called george, whom I gave superuser rights. I copied my public key from my local machine to a newly generated ~/.ssh/authorized_keys file, which gave me permission to access the server via SSH.
Yet when I run dep deploy with the new user:
server('production', '138.68.99.157')
    ->user('george')
    ->identityFile()
    ->set('deploy_path', '/var/www/test');
I get another RuntimeException:
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Now it looks like the new user george cannot access the ~/.ssh/id_rsa.pub key. So I copy the keys from the root folder into my home folder and also add the public key to the GitHub SSH settings.
cp /root/.ssh/id_rsa.pub /home/george/.ssh/id_rsa.pub
cp /root/.ssh/id_rsa /home/george/.ssh/id_rsa
Only to get the same error as before.
In the end I had to add GitHub to my list of known hosts:
ssh-keyscan -H github.com >> ~/.ssh/known_hosts
Only to get the next RuntimeException:
[RuntimeException]
sudo: no tty present and no askpass program specified
I managed to comment out this code in the deploy.php
// desc('Restart PHP-FPM service');
// task('php-fpm:restart', function () {
//     // The user must have rights to restart the service
//     // /etc/sudoers: username ALL=NOPASSWD:/bin/systemctl restart php-fpm.service
//     run('sudo systemctl restart php-fpm.service');
// });
// after('deploy:symlink', 'php-fpm:restart');
to finally get the deployment process done. Now I ask myself: is the restart of php-fpm really necessary, and should I continue debugging this deployment tool, or can I live without it?
And if I need it, can somebody help me understand what I need it for? And maybe, as a luxury, also provide the solution to the RuntimeException?
Try this:
->identityFile('~/.ssh/id_rsa.pub', '~/.ssh/id_rsa', 'pass phrase')
It works great for me - no need for an askpass program.
It helps to be explicit in my experience.
As for your php-fpm restart task... I haven't seen that before. Shouldn't be needed. :)
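If you do decide to keep the php-fpm restart task, the usual fix for "sudo: no tty present and no askpass program specified" is the passwordless sudo rule that the commented-out recipe code hints at. A sketch, assuming the deploy user george and a service named php-fpm.service (edit with visudo):

# /etc/sudoers.d/deployer
george ALL=NOPASSWD: /bin/systemctl restart php-fpm.service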
EDIT:
That you provide a password is probably a good sign that you ought to refactor your Deployer code a bit if you keep it under source control.
I am loading site-specific data from a YAML file, which I am not committing to source control.
The first bit of my stage.yml:
# Site Configuration
# -------------
prod_1:
  host: hostname
  user: username
  identity_file:
    public_key: /home/user/.ssh/key.pub
    private_key: /home/user/.ssh/key
  password: "password"
  stage: production
  repository: https://github.com/user/repository.git
  deploy_path: /var/www
  app:
    debug: false
    stage: 'prod'
And then, in my deploy.php:
if (!file_exists(__DIR__ . '/deployer/stage/servers.yml')) {
    die('Please create "' . __DIR__ . '/deployer/stage/servers.yml" before continuing.' . "\n");
}
serverList(__DIR__ . '/deployer/stage/servers.yml');
set('repository', '{{repository}}');
set('default_stage', 'production');
Notice that, when you use serverList, it replaces your server setup in deploy.php.

AWS Elastic Beanstalk Deployment Order

I'm deploying code to a single-instance web server AWS EB environment that will provision/update my connected RDS database. I've got an .ebextensions file that calls deployment code:
---
container_commands:
  01deploydb:
    command: /var/www/html/php/cli/deploy-db.php
    leader_only: true
On the same deployment, I moved the deploy-db.php file back one directory, into /cli/. On deployment, I get ERROR: [Instance: i-*****] Command failed on instance. Return code: 127 Output: /bin/sh: /var/www/html/php/cli/deploy-db.php: No such file or directory.
container_command 01deploydb in .ebextensions/01_db.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
If I deploy a version that does not include the command, then deploy a second update including the command, there is no error. However, adding the command and the file it calls at the same time produces the error. A similar sequence occurred earlier with a different command/file.
My question is: is there a documented order/sequence for how AWS updates the environment? I would have expected that my new version would have fully deployed (and the .php file installed) before container_commands are called.
The commands: section runs before the project files are put in place. This is where you can install server packages, for example.
The container_commands: section runs in a staging directory before the files are put in their final destination. Here you can modify your files if you need to. The current path is this staging directory, so you can run it like this (I might get the app directory wrong; maybe it should be php/cli/deploy-db.php):
container_commands:
  01deploydb:
    command: cli/deploy-db.php
    leader_only: true
Reference for above: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
You can also run post-deploy scripts. This is not very well documented (at least it wasn't). You can do something like this (it won't be leader-only though, but you could put a file in this directory through container_commands, as sketched after the example):
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_deploy.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      /var/www/html/php/cli/deploy-db.php
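If the script must stay leader-only, one approach following the hint above is to ship the hook in the repository and copy it into the hooks directory with a leader-only container command. A sketch, assuming the script is committed at the hypothetical path .ebextensions/99_deploy.sh:

container_commands:
  01copy_post_hook:
    command: cp .ebextensions/99_deploy.sh /opt/elasticbeanstalk/hooks/appdeploy/post/99_deploy.sh && chmod 755 /opt/elasticbeanstalk/hooks/appdeploy/post/99_deploy.sh
    leader_only: true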

How to parse index.php in server root by default in HipHop / HHVM (and fix the "404 File Not Found" error)?

Reproducible problem description:
When installing HipHop / HHVM via the official way [1][2], and then running the built-in server [3] from /var/www via
cd /var/www
sudo hhvm -m server
it will render a custom "404 File Not Found" message to the browser, regardless of /var/www's contents, when navigating to the server's root:
http://111.111.111.111/
However, HipHop will run perfectly when a filename is given, like
http://111.111.111.111/index.php
Filling the index.php with phpinfo() will also show "hiphop" as feedback, indicating that this PHP file is correctly parsed by HipHop.
Question:
How to let HipHop's server run index.php (etc.) by default when navigating to the server's root, like Nginx and Apache do?
Update:
Seems to be a common issue: [4], [5]
According to the documentation, the config.hdf file has a DefaultDocument directive. Set that.
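In config.hdf syntax that might look like the following (a sketch; the directive sits in the Server section):

Server {
  DefaultDocument = index.php
}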
For HHVM 3.0 you specify it in an ini config file with this:
hhvm.server.default_document = index.php
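On package installs this line typically goes in the server's ini file (commonly /etc/hhvm/server.ini, though the exact path may vary), or it can be passed as a define on the command line:

cd /var/www
sudo hhvm -m server -d hhvm.server.default_document=index.php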
