Strange behaviour Laravel Homestead Database Connection - php

I am experiencing a peculiar error working with Laravel, Homestead and MySQL. This is the part of my .env file related to the database:
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=33060
DB_DATABASE=mydatabasename
DB_USERNAME=homestead
DB_PASSWORD=secret
If I set the port to 3306, I can access the tables from my application but I cannot execute commands from Terminal such as php artisan migrate. If I set the port to 33060, I can execute commands from Terminal but I cannot access the tables from my application.

Your application runs on the IP provided in Homestead.yaml, so when localhost is resolved relative to your application, port 3306 works. When you run artisan without SSHing into your Vagrant VM, the command runs against the localhost of your machine, not the VM, so you're trying to run the migrations against a machine with no database.
The reason port 33060 works from your local machine is that Homestead by default forwards this port to port 3306 on your Vagrant virtual machine. But because your .env now specifies port 33060, the application inside the virtual machine can no longer reach the database on port 3306.
Leave DB_HOST set to 127.0.0.1 with port 3306, SSH into your Vagrant VM via the vagrant ssh command, and run your migration command from there.
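For example, assuming the default Homestead layout where projects are mapped under ~/code (the project folder name here is hypothetical):
cd ~/Homestead
vagrant ssh
cd ~/code/myproject
php artisan migrate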
Alternatively, you could keep multiple .env files for your various environments.
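A minimal sketch of that approach (the file names are hypothetical; you would copy the right one over .env depending on where the command runs): inside the VM the app reaches MySQL directly on 3306, while the host goes through the forwarded port.
# .env.vm - used by the app inside the VM
DB_HOST=127.0.0.1
DB_PORT=3306
# .env.host - used when running artisan from the host machine
DB_HOST=127.0.0.1
DB_PORT=33060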

Related

Remote docker http request

I have 2 computers running on the same network, as follows:
Macbook Pro M1 (Monterey)
  Acts as client (guest)
  Docker version 20.10.12, build e91ed57
  Docker Compose version v2.2.3
Lenovo (Ubuntu 21.04)
  Local IP: 192.168.0.63
  Acts as host (hosts all my docker containers)
  Docker version 20.10.7, build 20.10.7-0ubuntu5~21.04.2
  docker-compose version 1.29.2, build 5becea4c
Because Docker is super slow on Mac, and I am a Mac guy, I wanted to host my containers on Ubuntu and do all development and HTTP API requests from the Mac.
So on Ubuntu I set up all my docker containers.
And on the Mac I set: export DOCKER_HOST=192.168.0.63
When I run on my Mac:
docker ps : I can successfully list the containers from Ubuntu.
docker context ls : I get Current DOCKER_HOST based configuration tcp://192.168.0.63:2375
But when I try to make an HTTP request, e.g.
http://ucp.mydomain.localhost/api/v1/users/login
it fails.
Note that I have many containers and each has a vhost; my hosts file contains:
127.0.0.1 ucp.mydomain.localhost
127.0.0.1 checkout.mydomain.localhost
127.0.0.1 merchant.mydomain2.localhost
127.0.0.1 admin.mydomain3.localhost
127.0.0.1 core.mydomain3.localhost

How to reverse SSH tunnel to remote docker container for Xdebug?

There are many posts on SO and elsewhere on how to set this up. So far I've been unsuccessful in getting it working.
Setup
Local machine - Windows 10, with Cygwin, git bash, and WSL2 with Ubuntu installed; and MacBook Air (Mojave)
Host machine - AWS EC2 instance running Amazon Linux 2
Docker container - CentOS 7.8 running PHP with Xdebug
Goal
Remotely debug PHP code in container from local machine by utilizing a reverse tunnel from the local machine to the container.
I have gotten this working before when the PHP code was installed locally on the host machine, so the question is not around Xdebug. As soon as I moved the PHP code into the container, debugging no longer works.
What I've tried
Setting up a reverse tunnel from the local machine to the host EC2 instance works. For this I'm running ssh -vvv -i "aws.pem" -R 9000:localhost:9000 user@ec2instance in terminal, Cygwin, or git bash, and testing with nc -z localhost 9000 || echo 'no tunnel open' on the host machine.
When I docker exec -it container bash into the container and run nc, the tunnel is not available.
I'm using docker-compose:
version: '2'
services:
  web:
    image: 'privateregistry/project/container:latest'
    restart: always
    container_name: web
    ports:
      - '8082:80'
      - '447:443'
      - '9000:9000'
    volumes:
      - '.:/var/www/project'
I have tried with and without mapping the 9000 port. I have tried variations of the ssh tunnel:
ssh -vvv -i "aws.pem" -R :9000:localhost:9000 user@ec2instance
ssh -vvv -i "aws.pem" -R 0.0.0.0:9000:localhost:9000 user@ec2instance
ssh -vvv -i "aws.pem" -R \*:9000:localhost:9000 user@ec2instance
ssh -vvv -i "aws.pem" -R 9000:172.20.0.2:9000 user@ec2instance (container IP)
I've also tried using ssh -L with no luck.
Several posts, like this one, suggest adding GatewayPorts yes to the SSH server config on the host machine. I've tried this as well with no change.
I have not tried using --network=host, primarily due to security concerns. I also would rather not use ngrok, as I'd like to be able to use localhost or host.docker.internal for the xdebug.remote_host setting.
For completeness, here is what I have for Xdebug:
[XDebug]
xdebug.remote_enable=1
xdebug.remote_autostart=1
xdebug.remote_handler="dbgp"
xdebug.remote_port=9000
xdebug.remote_host="host.docker.internal"
;xdebug.remote_connect_back=1
xdebug.idekey=VSCODE
xdebug.remote_log="/var/log/xdebug.log"
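(These are Xdebug 2 setting names. If the container ships Xdebug 3 instead, the equivalents are renamed; a sketch of the same configuration in Xdebug 3 terms:)
[XDebug]
xdebug.mode=debug
xdebug.start_with_request=yes
xdebug.client_port=9000
xdebug.client_host="host.docker.internal"
xdebug.idekey=VSCODE
xdebug.log="/var/log/xdebug.log"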
I got this working. After reading up on the ssh man page and looking things over again, I realized I was binding to the Docker container IP, not the bridge (docker0) IP.
I updated my connect command to ssh -vvv -i "aws.pem" -R 9000:172.17.0.1:9000 user@ec2instance with the right IP and the tunnel started working. I still have GatewayPorts enabled (per the man page) and removed the 9000:9000 mapping.
I then updated my xdebug.remote_host value to the same IP and debugging is now working. Not sure why host.docker.internal didn't work, but that's for another day.
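For anyone verifying the same setup, a quick check from inside the container, using the same nc test as above (and assuming the default docker0 bridge address of 172.17.0.1):
docker exec -it container bash
nc -z 172.17.0.1 9000 && echo 'tunnel open' || echo 'no tunnel'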

MariaDB remote access

I am using Laravel 5.4 with a remote Ubuntu 16.04 server running the latest MariaDB.
I have tried almost every configuration I could find on Google, and I have also reverted to the original state.
My current problem is that I cannot connect to my remote MariaDB. The credentials are absolutely fine.
I changed the /etc/mysql/mariadb.conf.d/50-server.cnf file, setting bind-address to the IP of the server, the same IP where I can access my phpMyAdmin, like 1.1.1.1/phpmyadmin. My current .env setting is:
DB_CONNECTION=mysql
DB_HOST=1.1.1.1
DB_PORT=3306
DB_DATABASE=*******
DB_USERNAME=root
DB_PASSWORD=********
But it throws the error SQLSTATE[HY000] [2002] No connection could be made because the target machine actively refused it.
Check your AppArmor profiles. It may be blocking the mysqld process from accessing the network.
$ sudo apparmor_status
Make sure mysqld is not in any "enforce" mode.
Check to see if there is a mysqld profile in /etc/apparmor.d/
$ ls /etc/apparmor.d/*mysqld*
To disable it, put a symlink in /etc/apparmor.d/disable/
$ sudo ln -s /etc/apparmor.d/<full_name_of_mysqld_file> /etc/apparmor.d/disable/
followed by
$ sudo apparmor_parser -R /etc/apparmor.d/<full_name_of_mysqld_file>
Verify that mysqld protection is disabled:
$ sudo aa-status
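Once AppArmor is out of the way, it is worth confirming that mysqld is actually listening on the external interface and reachable from outside. A minimal check, using the server IP from the question (and assuming the MySQL user is granted access from remote hosts, e.g. root@'%'):
On the server:
$ sudo netstat -tlnp | grep 3306
From the Laravel machine:
$ mysql -h 1.1.1.1 -P 3306 -u root -p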

Error starting userland proxy: listen tcp 0.0.0.0:3306: bind: address already in use

I have to make a Laravel app and deliver a Dockerfile, but I'm really stuck with this. Before that I had a nightmare while installing Laravel on my machine.
I'm trying to get dockervel image and following the steps here:
http://www.spiralout.eu/2015/12/dockervel-laravel-development.html
But when I run dartisan make:auth it gives the error below:
ERROR: for dockervel_mysql_1  Cannot restart container c258b418c03cbd6ec02c349c12cf09403f0eaf42fa9248019af7860d037d6474: driver failed programming external connectivity on endpoint dockervel_mysql_1 (da3dd576458aa1fe3af7b539c48b9d61d97432cf5e9ee02d78562851f53981ae): Error starting userland proxy: listen tcp 0.0.0.0:3306: bind: address already in use.
I have tried changing the default port in docker-compose.yml:
ports:
  - "8084:80"
Still nothing. I also tried stopping apache2 (service apache2 stop) on my machine, tried docker-compose restart, and removed the docker container dockervel_mysql_1.
I should mention that I already have one Laravel project in /var/www/laravel.
Please help!
I had the same problem, and
sudo netstat -nlpt | grep 3306
showed me the PID and which service started it (mysqld). Whenever I tried to kill the PID, it was started again. But the problem was fixed when I stopped the service with
sudo service mysql stop
Notice that you have to use mysql and not mysqld.
I hope that this will do it for you - I was able to run docker-compose up without any problems
Try to kill all the processes using the port 3306:
sudo kill `sudo lsof -t -i:3306`
Then, run your docker containers:
sudo docker-compose up
You probably already have a MySQL service running on port 3306. You should close it first.
Then try docker-compose down and restart it with docker-compose up.
Remember also to change the permissions after you add a file to your project (like dartisan make:auth) with dpermit.
UPDATE:
since you have changed the port to "8084", you should go to localhost:8084.
If you see the Apache default page then you are probably browsing another server, since dockervel is built on nginx.
You probably also have some gaps in your Docker mental model. Don't mix up your local storage with docker storage: /var/www in a container is different from your local /var/www. In docker-compose.yml you mount the local ~/dockervel/www to the container's /var/www.
I would suggest that you start all over again and revert the changes you've made to your Apache server. Shut it down; you don't need it. Dockervel will provide you with an nginx server in a container.
My fix for this issue was to go into docker-compose.yml and change
ports:
  - 3306:3306
to
ports:
  - 3307:3306
then run this command again:
docker-compose up
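Note that with the 3307:3306 mapping, anything connecting from the host (your app's .env, a mysql client) must now use port 3307, while other containers on the same compose network still use 3306. For example (credentials are placeholders):
mysql -h 127.0.0.1 -P 3307 -u root -p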
On Ubuntu, running this command will stop your MySQL so that your docker container can work:
sudo service mysql stop
Then, if your apache2 is running, you need to stop that service too, especially when you want to work with nginx:
sudo service apache2 stop
Then, you can run your docker-compose up -d ... command
For me, when I was trying to load and run a MySQL image in a docker container, I was getting the same error, and even stopping the local MySQL server in System Preferences didn't help.
The cause was that port 3306 was being used by my Tomcat server. Basically, you have to make sure that the port the docker command wants to use (in this case 3306) is not in use by any other service, otherwise the command will fail.
First solution:
sudo service mysql stop
and then run
docker-compose up
to start the application. This is a quick solution: stopping the current MySQL service on port 3306 makes the same port available for your docker application.
Scenario 1: The problem returns when you want to run both applications, the one that was running previously and the one you want to run now; at that point this approach won't work.
Second solution (for scenario 1):
If your next/current application comes from docker, try the following; it works without disturbing the first application's MySQL service on port 3306.
Open docker-compose.yml and change the MySQL port.
Default configuration:
ports:
  - ${SERVER_PORT_DB:-3306}:3306
Changed port:
ports:
  - ${SERVER_PORT_DB:-3307}:3306
and now run the command below to start the application:
docker-compose up
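As a side note, because the file reads ${SERVER_PORT_DB:-3306}, you can also override the host port per run without editing the file, by setting the variable in the shell:
SERVER_PORT_DB=3307 docker-compose up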
The error you are seeing is from a local mysql instance listening on port 3306 (currently on pid 1370 from your comments). You won't be able to run a container that publishes on this host port while it's already in use by another process. Solutions are to either stop mysql on the local host, or to change/remove the published port in your container. If the port is only needed by other containers, you can leave it unpublished and they can communicate directly on the docker private network (by default, this is "bridge").
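A minimal sketch of that last point, with hypothetical service and image names: if only other containers need the database, omit the ports: entry entirely and let them reach it by service name over the compose network:
version: '2'
services:
  app:
    image: myapp:latest   # hypothetical app image; connects to db:3306
  db:
    image: mysql:5.7
    # no ports: entry, so nothing is published on the host and 3306 cannot conflict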
This option did it for me:
sudo pkill mysql
You need to change the MySQL port, because MySQL is installed on your machine and takes the default port 3306, and now you are trying to run dockervel_mysql_1 on the same port 3306; this is why the error says "Address already in use".
So if you change the dockervel_mysql_1 port to 3307, for example, it will work fine without stopping the MySQL that is running on your machine.
Running this command fixed the issue for me:
docker swarm leave --force
Explanation:
I had started a docker swarm service as a master node on my localhost.
Swarm was taking network priority and was already using these ports.
If Tomcat is running on your machine and is connected to MySQL on port 3306, try killing Tomcat first and then doing docker-compose up.
I used two different versions of MySQL: MySQL 5 on my local machine and MySQL 8 in docker. When Tomcat is connected to MySQL 5 on 3306 and you just stop MySQL 5, the port isn't fully released, because Tomcat is still connected to 3306. Kill Tomcat and then bring docker up; it should work.
Happy coding!!
I know this question is quite old, but people are still looking for answers. You don't have to run any kill command; instead you can use docker-compose's --remove-orphans flag and it will clean things up for you. For example:
docker-compose up -d --build --remove-orphans
This worked for me; I just changed the app's port to 8084:80, as described here.

How to make artisan serve work in virtual host?

I'm on Windows 8 and I'm using WAMP to run my Laravel project. I have configured Apache and created a virtual host to access my app through http://myapp.dev.
I would like to know if it's possible to use the built-in PHP server (to run the Laravel application through artisan serve) to point to my virtual host instead of http://localhost:8000.
I tried to change the app URL in app.php but it didn't work.
Point myapp.dev to 127.0.0.1 in your hosts file, and do php artisan serve --host 0.0.0.0 --port 80.
On Linux/OSX this requires sudo privileges; I'm not sure what Windows will require. You'd want to stop Apache too, since it uses port 80 and will cause a conflict if both try to run on that port.
You need to change the hosts file (Ubuntu => /etc/hosts, Windows => %WINDIR%\System32\drivers\etc\hosts).
127.0.0.1 myapp.dev
The .env file also needs changing:
APP_URL=http://myapp.dev:8000
$ php artisan serve --host=myapp.dev
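If the goal is to drop the :8000 suffix and browse plain http://myapp.dev, you can combine this with the first answer's port flag (Apache/WAMP must be stopped first so that port 80 is free, and elevated privileges may be required):
$ php artisan serve --host=myapp.dev --port=80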
