heroku local is not working - php

I'm trying to use heroku local, but it's not working, as can be seen below. The command prompt says 'vendor' is not recognized as an internal or external command...
C:\Users\owner\Desktop\php-getting-started> heroku local
[OKAY] Loaded ENV .env File as KEY=VALUE Format
15:23:09 web.1 | 'vendor' is not recognized as an internal or external command,
15:23:09 web.1 | operable program or batch file.
[DONE] Killing all processes with signal null
15:23:09 web.1 Exited with exit code 1
When I tried the first time, I got a message saying
"No .env file found"
so I added a .env file, but I still cannot run locally.
Any idea why this is happening?

I've had the same issue today but managed to solve it by updating my Procfile from
web: vendor/bin/heroku-php-apache2 web/
to
web: vendor\bin\heroku-php-apache2 web\
Hopefully this solves your issue too.

This worked for me:
1. Create a file named Procfile.windows in the same location as the Procfile.
2. Add this command to Procfile.windows: web: php -S localhost:8000 -t web/
3. In the terminal, run heroku local -f Procfile.windows
4. Visit http://localhost:8000/

Related

Starting Docker PHP server using CMD makes the host receive "Connection reset by peer" upon connecting

I have a PHP server that I need to launch in a Docker image alongside a Python service. Both of them need to be in the same image. At first, I wrote the Dockerfile to start the PHP server by following a simple guide I found online, and I came up with this:
FROM php:7-apache
COPY ./www/ /var/www/html
WORKDIR /var/www/html
EXPOSE 70
Then, because I need a third service running on a second container, I created the following docker-compose file:
version: '3.3'
services:
  web:
    build: .
    image: my-web
    ports:
      - "70:80"
  secondary-service:
    image: my-service
    ports:
      - "8888:8888"
Using only that, the website works just fine (except for the missing service in the web container). However, if I want to start a service inside the web container alongside the web server, I need to start the website manually from a bash script, since Docker can only have one CMD entry. This is what I tried:
FROM php:7-apache
COPY ./www/ /var/www/html
RUN mkdir "/other_service"
COPY ./other_service /other_service
RUN apt-get update && bash /other_service/install_dependencies.sh
WORKDIR /var/www/html
EXPOSE 70
CMD ["bash", "/var/www/html/launch.sh"]
And this is launch.sh:
#!/bin/bash
(cd /other_service && python3 /other_service/start.py &) # CWD needs to be /other_service/
php -S 0.0.0.0:70 -t /var/www/html
And that also starts the server without problems, along with other_service.
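As an aside, the (cd ... && command &) form used in launch.sh runs the command in a subshell whose working directory is changed, while the parent script's directory stays untouched. A minimal standalone sketch of that pattern (the /tmp paths and stand-in script are hypothetical, not from the question):

```shell
#!/bin/bash
# Sketch of the subshell pattern from launch.sh (hypothetical /tmp paths).
mkdir -p /tmp/other_service
# A stand-in for start.py: it records the directory it was started from.
printf '#!/bin/sh\npwd > /tmp/other_service/cwd.txt\n' > /tmp/other_service/start.sh
chmod +x /tmp/other_service/start.sh

cd /tmp                                   # the parent's working directory
(cd /tmp/other_service && ./start.sh &)   # the child runs with its own CWD
sleep 1                                   # give the background child time to finish

cat /tmp/other_service/cwd.txt            # the child saw /tmp/other_service
pwd                                       # the parent is still in /tmp
```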
However, when I go to my browser (on the host) and browse to http://localhost:70, I get the error "Connection reset". The same happens when I make the request with curl localhost:70, which results in curl: (56) Recv failure: Connection reset by peer.
I can see in the web container's log that the PHP test server is running:
PHP 7.4.30 Development Server (http://0.0.0.0:70) started
And if I open a shell inside the container and run the curl command there, it gets the webpage without any problems.
I have been searching through similar questions, but none of them had an answer, and the ones that did didn't work for me.
What is going on? Shouldn't manually starting the server from a bash script work just fine?
Edit: I've just tried starting only the PHP server, like below, and it doesn't let me connect to the webpage either:
#!/bin/bash
#(cd /other_service && python3 /other_service/start.py &) # CWD needs to be /other_service/
php -S 0.0.0.0:70 -t /var/www/html
I found the issue. It was as easy as starting the Apache server too:
#!/bin/bash
(cd /other_service && python3 /other_service/start.py &) # CWD needs to be /other_service/
/etc/init.d/apache2 start
php -S 0.0.0.0:70 -t /var/www/html

Plesk bin site error when creating a domain from bash script

I'm having an issue creating a domain on Plesk. This command works fine when run through the CLI:
plesk bin site --create newdomain.com -webspace-name existingdomain.com -www-root /httpdocs
But when I add the same code above to a bash script (create_domain.sh) and try to run it with:
sudo bash create_domain.sh
I get the following error
An error occurred during domain creation: hosting update is failed: cObject->update() failed: Some fields are empty or contain an improper value. ('www_root' = ')
Does anyone know why this is happening?
This was because I was using Windows to edit the bash file. I deleted it, rewrote it in nano, and it worked.
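For context: a file edited on Windows usually gets CRLF line endings, and the stray carriage return at the end of each line becomes part of the last argument, which is almost certainly why Plesk saw a mangled www_root value. Deleting the file isn't strictly necessary; the carriage returns can be stripped with sed (or dos2unix, if installed). A sketch using the create_domain.sh name from the question:

```shell
# Simulate a Windows-edited script: the line ends in \r\n.
printf 'plesk bin site --create newdomain.com -www-root /httpdocs\r\n' > create_domain.sh

# Strip the trailing carriage return from every line, in place (GNU sed).
sed -i 's/\r$//' create_domain.sh
```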

Continuously running laravel-echo-server with Supervisor

I have installed and configured Supervisor and laravel-echo-server, and I have set up a program that is supposed to continuously run laravel-echo-server. It looks like this:
[program:laravel-echo-server]
directory=/var/www/html/laravel
command=/root/.nvm/versions/node/v10.13.0/bin/laravel-echo-server start
autostart=true
autorestart=true
redirect_stderr=true
user=root
stdout_logfile=/var/log/laravel-echo-server.log
The command-line error I'm getting is: laravel-echo-server: ERROR (spawn error)
The error in the log file is: /usr/bin/env: ‘node’: No such file or directory
What I've tried so far:
Checked that laravel-echo-server is installed globally with npm list -g laravel-echo-server (it is).
Defined an absolute path to laravel-echo-server, like this:
command=/root/.nvm/versions/node/v10.13.0/bin/laravel-echo-server start --dir /var/www/html/laravel
Created a symlink for laravel-echo-server in /usr/bin, and placed laravel-echo-server.json files both next to the symlink and at the absolute path (for testing purposes, to see if I can start the server from there; I can), then redefined the command in the program to command=laravel-echo-server start. Nothing works and I'm out of ideas.
Can someone help me out with what I'm doing wrong?
P.S. Again for testing purposes, I've set up PM2, and when I run laravel-echo-server through it, it says that it's online, but it really isn't, so I'm assuming it probably hits a similar error.
I've found the solution to my problem:
ln -s /root/.nvm/versions/node/v10.13.0/bin/node /usr/bin/node
From what I understand, this is a bug with Node on Debian.
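An alternative to the symlink (an untested sketch): the error comes from the #!/usr/bin/env node shebang failing because Supervisor's PATH doesn't include nvm's bin directory. Assuming a Supervisor version that supports %(ENV_VAR)s expansion, the PATH can be extended in the program block instead, using the nvm path from the question:

```ini
[program:laravel-echo-server]
directory=/var/www/html/laravel
command=laravel-echo-server start
environment=PATH="/root/.nvm/versions/node/v10.13.0/bin:%(ENV_PATH)s"
autostart=true
autorestart=true
redirect_stderr=true
user=root
stdout_logfile=/var/log/laravel-echo-server.log
```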

Copy remote file with rsync in php

I'm trying to execute a command (rsync) from PHP, to copy folders and files from a remote server to a local folder.
This is the code I wrote in PHP. The command WORKS over SSH (local terminal, and remotely with putty.exe), correctly copying the folders and files.
But it doesn't work in PHP. What can I do? Do you know a better (more secure/optimal) way to do this?
exec("echo superuserpassword | sudo -S sshpass -p 'sshremoteserverpassword' rsync -rvogp --chmod=ugo=rwX --chown=ftpuser:ftpuser -e ssh remoteserveruser@remoteserver.com:/path/files/folder /opt/lampp/htdocs/dowloadedfiles/", $output, $exit_code);
EDIT:
I read this guide to create a link between my server and my local machine.
Now I can log in over SSH to my remote machine without a password.
I changed my command:
rsync -crahvP --chmod=ugo=rwX --chown=ftpuser:ftpuser remote.com:/path/to/remote/files /path/to/local/files/
This command also works in the terminal, but when I run it through PHP's exec(), it fails again, this time with a different error: 127.
As MarcoS told in his answer, I checked the error_log.
The messages are these:
ssh: relocation error: ssh: symbol EVP_des_cbc, version OPENSSL_1.0.0 not defined in file libcrypto.so.1.0.0 with link time reference
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: remote command not found (code 127) at io.c(226) [Receiver=3.1.1]
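Exit code 127 is the shell's "command not found" status, which matches the "remote command not found (code 127)" line in the log above; it is easy to reproduce (the command name below is hypothetical):

```shell
# 127 is the exit status a shell returns when the command cannot be found.
sh -c 'no_such_command_xyz' 2>/dev/null
echo "exit: $?"   # prints: exit: 127
```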
Well, after a lot of trial and error, I finally cut the problem off at the root:
I read this guide (like the last one, but better explained), moved the PHP file that executes the rsync command to the remote server (where the files are located), and ran the rsync.php file there; it worked perfectly.
To execute on the machine with the files (the files to copy, and rsync.php):
1.- ssh-keygen generates keys
ssh-keygen
Press Enter for an empty passphrase, and press Enter again to confirm.
2.- ssh-copy-id copies public key to remote host
ssh-copy-id -i ~/.ssh/id_rsa.pub remoteserveraddressip(xxx.xxx.xxx.xxx)
The rsync.php file:
exec("rsync -crahvP /path/in/local/files/foldertocopy remoteuser@remoteserveraddress:/path/in/remote/destinationfolder/", $output, $exit_code);
After all of that, navigate to rsync.php and everything should work. At least it worked for me...
I suppose you are experiencing identity problems... :-)
On a cli, you are running the command as the logged-in user.
On PHP, you are running the command as the user your web server runs as (for example, apache often runs as www-data, or apache user...).
One possible solution I see (if the above is the real cause of the problem) is to add your user to the web server's group...
I'd also suggest checking the web server's error logs, to be sure about the real cause of the problem... :-)
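A quick way to confirm the identity difference described above (a generic sketch, not specific to the asker's setup) is to compare the CLI user with the web server's user:

```shell
# The user a CLI shell runs as:
whoami

# The user(s) an Apache process runs as (often www-data), if Apache is
# running; -C selects by command name (Linux procps ps).
ps -o user= -C apache2 | sort -u
```

From PHP itself, exec('whoami', $out) would show the same thing for the web-server side.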

Using apache bench on local server

I want to test my site's load (the site is written in PHP) via ApacheBench.
I have a local server (XAMPP); OS: Windows.
In the directory apache/bench there is a file ab.exe; does this mean ApacheBench is installed on my local server?
I have a local site, localhost/my_test, and I want to simulate 1000 concurrent requests to it. In CMD I run this command:
ab -c 1000 localhost/my_test
The answer from CMD is: 'ab' is not recognized as an internal or external command,
operable program or batch file.
Please tell me, where am I wrong?
AB needs a complete URL:
Usage: ab [options] [http://]hostname[:port]/path
So, in your case the URL should look like:
localhost/my_test/
It needs the path, which in this case is simply the trailing /
Hope this helps
Paul.
ab -c 1000 localhost/my_test
answer from CMD is: 'ab' is not recognized as an internal or external command, operable program or batch file.
It means that ab.exe is not in your PATH.
If you start CMD, you should first navigate into the apache/bench directory, and run the command from that folder.
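The underlying mechanism is the same on any OS: the shell only finds executables through PATH (or an explicit path). A Unix sketch of the same failure and fix, with a hypothetical tool name and directory standing in for ab.exe:

```shell
# A tool that is not on PATH fails exactly like the 'ab' error above.
mkdir -p /tmp/benchdir
printf '#!/bin/sh\necho ok\n' > /tmp/benchdir/mytool
chmod +x /tmp/benchdir/mytool

mytool 2>/dev/null || echo "mytool: not found"   # not on PATH yet

export PATH="/tmp/benchdir:$PATH"                # add its directory to PATH
mytool                                           # now prints: ok
```

Running the command from inside its own directory, as suggested above, works for the same reason: the shell is then given a resolvable path to the executable.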
I think you should include the port your server is running on.
For example, I use http://127.0.0.1:8080 (port 8080, as set in my XAMPP config).
