This question already has answers here:
From inside of a Docker container, how do I connect to the localhost of the machine? (41 answers)
How to access host port from docker container [duplicate] (17 answers)
Closed 4 years ago.
Hi, folks.
I have a service running on my host machine. It is a NodeJS app with Express, and it works fine at "localhost:3000".
Then, in a separate project, I have a Laravel app running fine inside Docker, and I access it at "http://localhost".
Now, my Laravel app needs to call the NodeJS app. I saw in the Docker documentation that I should use "host.docker.internal", since it resolves to my host machine.
In my PHP code I have this ($this->http is a GuzzleHttp\Client instance):
$response = $this->http->request('POST', env($store->remote), [
    'form_params' => [
        'login' => $customer->login,
        'password' => $customer->password,
    ],
]);
If I call the NodeJS app from Postman, it works fine, but calling it from that PHP code I get this error:
"message": "Client error: `POST http://host.docker.internal:3000` resulted in a `404 Not Found` response:\n<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"utf-8\">\n<title>Error</title>\n</head>\n<body>\n<pre>Cannot POST /</p (truncated...)\n",
"exception": "GuzzleHttp\\Exception\\ClientException",
"file": "/var/www/html/vendor/guzzlehttp/guzzle/src/Exception/RequestException.php",
"line": 113,
Does anyone have any clue how I can call my node app from PHP in Docker?
EDIT
I was wondering whether I should open port 80 and bind it to port 3000 on my PHP instance (since the request runs inside the PHP Docker image). I added this ports attribute to my docker-compose.yml:
php:
  build: ./docker
  volumes:
    - .:/var/www/html
    - ./.env:/var/www/html/.env
    - ./docker/config/php.ini:/usr/local/etc/php/php.ini
    - ./docker/config/php-fpm.conf:/usr/local/etc/php/php-fpm.conf
    - ./docker/config/xdebug.ini:/usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
  links:
    - mysql
  ports:
    - "3000:80"
So port 80 in my PHP container would bind to port 3000 on my OSX host. But Docker complains that port 3000 is in use:
Cannot start service php: b'driver failed programming external connectivity on endpoint project_php_1 (241090....): Error starting userland proxy: Bind for 0.0.0.0:3000 failed: port is already allocated'
Yes! In fact it is allocated: by my NodeJS app, which is exactly where I want to go. It looks like I do not understand very well how ports and DNS work inside Docker for Mac.
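For what it's worth, an outbound call from the container needs no ports: mapping at all; publishing ports only matters for traffic coming into the container. A minimal sketch of the container-to-host call, with the URL hardcoded purely for illustration:

// Outbound request from the PHP container to the NodeJS app on the host.
// host.docker.internal resolves to the host machine on Docker for Mac.
$client = new \GuzzleHttp\Client();
$response = $client->request('GET', 'http://host.docker.internal:3000');
echo $response->getStatusCode();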
Any help is much appreciated.
SOLVED
Hey, guys, I figured it out. I turned off the Docker container, pointed a regular Apache at my Laravel project, and saw what was happening: CORS.
I already had cors in my Express app, but after configuring it properly, it worked!
Here it is, in case anyone stumbles here and needs it:
1) Add cors to your Express (if you haven't yet)
2) Configure cors for your domains. For now, I will keep it open, but for production apps, please take care and control wisely who can query your app:
// Express app:
const cors = require('cors'); // require the cors package first

app.use(
    cors({
        "origin": "*",
        "methods": "GET,HEAD,PUT,PATCH,POST,DELETE",
        "preflightContinue": false,
        "optionsSuccessStatus": 204
    })
);
app.options('*', cors());
3) Use the host.docker.internal address (in my case, host.docker.internal:3000, since my app is running on that port) from PHP to reach your Express app on the OSX host machine. In my case, it will be a different domain/IP once it gets to production.
4) Just use GuzzleHttp\Client to make your HTTP call:
$response = $this->http->request('POST', env($store->remote) . '/store-api/customers/login', [
    'json' => [
        "login" => $customer->login,
        "password" => encrypt($customer->password),
    ]
]);
An important point to note: Express expects JSON (in my app, at least), so do NOT use 'form_params'; use the 'json' option for POST requests, as in the sketch below.
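To make the difference concrete, here is a minimal sketch ($url and $login stand in for the values above): Guzzle's 'form_params' option sends the body as application/x-www-form-urlencoded, while 'json' sends application/json, which is what Express's JSON body parser expects.

// 'form_params' sends application/x-www-form-urlencoded; Express's JSON
// parser will leave req.body empty for this:
$this->http->request('POST', $url, ['form_params' => ['login' => $login]]);

// 'json' sends application/json with the matching Content-Type header:
$this->http->request('POST', $url, ['json' => ['login' => $login]]);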
Also, this was NOT a duplicate of the other answers, as marked by @Phil, because those answers point to the same thing I had already mentioned: using the 'host.docker.internal' address.
Related
I'm trying to run a Symfony application with Bref and serverless-offline. I know that Bref doesn't officially support serverless-offline, but I want to give it a shot; this thread - https://github.com/brefphp/bref/issues/875 - implies that it should be possible, but I'm not even getting to the error described there yet.
When I run sls invoke local -f MyLambda I get this error:
{"errorType":"exitError","errorMessage":"RequestId: 09256631-dcc7-1775-350e-b0c10a2c9c00 Error: Couldn't find valid bootstrap(s): [/var/task/bootstrap /opt/bootstrap]"}
So I assume that serverless-offline fails to fetch the layer correctly. The directory .serverless/layers contains a sub-structure php-81-fpm/18 for the PHP layer, but it is empty.
What's important is that I'm running this setup in a development Docker container. I mounted the docker socket so the Docker daemon on the host is used.
The serverless.yml looks like this (simplified):
plugins:
  - ./vendor/bref/bref
  - serverless-offline

functions:
  MyLambda:
    handler: public/index.php
    layers:
      - ${bref:layer.php-81-fpm} # also tried with 'arn:aws:lambda:eu-central-1:209497400698:layer:php-81-fpm:18'

custom:
  serverless-offline:
    host: 0.0.0.0
    useDocker: true
    dockerHost: host.docker.internal
I'm happy to provide further information, please let me know. Don't hesitate to suggest "low-level" things as I'm quite new to this ecosystem. Thanks for your help!
I'm trying to publish with the HubInterface, but every time I get this error:
SSL connect error for "https://localhost:8000/.well-known/mercure"
I'm running Mercure with the Symfony Local Web Server, so Mercure is running at https://localhost:8000/.well-known/mercure. I have checked and it is running, but I can't manage to publish. I'm making this call from a controller which is called from Vue.
$update = new Update(
    [
        sprintf("/conversaciones/%s", $conversacion->getId()),
        sprintf("/conversaciones/%s", $recipiente->getUsuario()->getNombre()),
    ],
    $mensajeSerializado,
    true
);
$this->hubInterfacePublisher->publish($update);
A comment on another Stack Overflow question mentioned this:
Mercure with symfony not working with vue
"Mercure doesn't support self-signed certificates and you have to add verify_peer: false in config/dev/framework.yaml under http_client.default_configuration."
I tried it, but it didn't work for me. Also, from what I have read, it is not recommended to set it to false (see HTTP Certificates).
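One way to narrow this down is to bypass the bundle and POST to the hub directly with peer verification disabled; if that works, the self-signed certificate is the only problem. A hedged diagnostic sketch (the JWT placeholder and topic are assumptions, not values from this app):

use Symfony\Component\HttpClient\HttpClient;

// Diagnostic only: disable TLS verification to test whether the
// self-signed certificate is what breaks the publish.
$client = HttpClient::create([
    'verify_peer' => false,
    'verify_host' => false,
]);
$response = $client->request('POST', 'https://localhost:8000/.well-known/mercure', [
    'headers' => ['Authorization' => 'Bearer YOUR_PUBLISHER_JWT'],
    'body' => [
        'topic' => '/conversaciones/1',
        'data' => '{"mensaje": "test"}',
    ],
]);
echo $response->getStatusCode();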
I also tried changing https to http in my .env, but then I got a 401 Unauthorized.
I have checked the SymfonyCasts tutorials and they have the .env values on localhost:8000, so I don't think that can be it, but I don't know what else I can try.
As far as I know, it has to do with CORS, but I don't know how to fix it.
Any help is appreciated,
Thank you :)
I have a problem using a Docker Compose link alias. I host a PHP application on Docker and use curl in the application.
This is my docker-compose.yml:
php:
  external_links:
    - apache:my-apache
That link is working; I have a my-apache entry in /etc/hosts:
172.17.0.5 my-apache
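(For reference, the alias itself is already a usable hostname, so a curl call from the PHP container can target it directly; the path here is hypothetical:)

// "my-apache" resolves via /etc/hosts, so it works as the host part of a URL.
$ch = curl_init('http://my-apache/index.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$body = curl_exec($ch);
curl_close($ch);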
But the problem comes when I use PHP curl to access the my-apache service: PHP curl requires a strict URL format (http://domain.com), and I am unable to use http://domain.com as the alias for the Docker link.
Using this as the link alias produces an error:
apache:http://domain.com
The error:
ERROR: for php too many values to unpack
Any suggestions?
Big Thanks
So I'm trying to build two separate applications: one used as a backend (Laravel as a REST API) and an Angular application as the client. Eventually those two apps have to work together under the same domain as a single web app.
What I'm trying to accomplish:
My Angular app is a single-page application that boots from index.html; all the routes are handled by Angular except /api/*, which should be handled by the Laravel app.
I'm using two different apps to keep the web app more flexible, so I can easily change my backend framework and technologies, and test each app stand-alone more easily.
I don't want to use CORS in my response headers because my REST API serves ONLY my Angular app, not other applications (such as an API for developers).
I want to use a proxy that will forward all requests coming to http://localhost:9100/api/* to http://localhost:9000/api/*.
First, I'm running Laravel on port 9000 with:
php artisan serve --port 9000
And the Angular app on port 9100 by running a gulp task (index.html is in ./src):
gulp.task('webserver', function(){
    connect.server({
        root: './src',
        port: 9100,
        middleware: function(connect, o) {
            var url = require('url');
            var proxy = require('proxy-middleware');
            var options = url.parse('http://localhost:9000/api');
            options.route = '/api';
            return [proxy(options)];
        }
    });
});
Both apps work perfectly stand-alone, but when I try to navigate to
http://localhost:9100/api/v1/comments I receive the following error:
Error: connect ECONNREFUSED at errnoException (net.js:904:11) at Object.afterConnect [as oncomplete] (net.js:895:19)
I tried to investigate the cause of this problem; some people say it is connected to the hosts file, so I added the line:
127.0.0.1 localhost
But it doesn't work.
I tried a different gulp task:
gulp.task('webserver', function() {
    gulp.src("./src")
        .pipe(webserver({
            port: 9100,
            livereload: true,
            open: 'http://localhost:9100',
            proxies: [
                {
                    source: '/api', target: 'http://localhost:9000/api'
                }
            ]
        }));
});
And I receive the exact same error...
My development environment is Windows 10 x64.
Did you try http-proxy-middleware instead of proxy-middleware?
I experienced the same error with proxy-middleware (Gulp browser-sync - redirect API request via proxy):
Error: connect ECONNREFUSED
at errnoException (net.js:904:11)
at Object.afterConnect [as oncomplete] (net.js:895:19)
I ended up creating http-proxy-middleware, which solved the issue in my case.
proxy-middleware somehow didn't work on the corporate network; http-proxy did. (http-proxy-middleware uses http-proxy to do the actual proxying.)
I guess you are using gulp-webserver; the proxy can be added like this:
var proxyMiddleware = require('http-proxy-middleware');

gulp.task('webserver', function() {
    gulp.src("./src")
        .pipe(webserver({
            port: 9100,
            livereload: true,
            open: 'http://localhost:9100',
            middleware: [proxyMiddleware('/api', {target: 'http://localhost:9000'})]
        }));
});
Never found out why this error is thrown with proxy-middleware in the corporate network ...
Update:
I think this question has been answered. It was a problem with the artisan server; it had to be run this way:
php artisan serve --host 0.0.0.0
(By default, artisan serve binds only to 127.0.0.1, which appears to be why the proxied connections were refused.)
Source:
https://github.com/chimurai/http-proxy-middleware/issues/38
https://github.com/chimurai/http-proxy-middleware/issues/21#issuecomment-138132809
At my office I can connect directly to the relevant SOAP servers and my code works as it should.
However, because of security, VPN connections (from home) are not allowed to access the SOAP servers, so I have to resort to using an SSH tunnel over a jump station.
Because the WSDL files contain absolute URLs, I can't use the "location" option to change this; it won't load the WSDL. Therefore I've resorted to adding entries to the hosts file mapping that server name to 127.0.0.1, plus an SSH tunnel to our jump station forwarding the correct ports.
This allows me to use the original WSDL without modification; I just have to comment out the hosts entries when at the office.
Via SoapUI everything works. I can load the WSDL, it fully parses it (it has a lot of includes - big corporate soap service) and I can launch SOAP requests that get answered correctly.
However, if I do the same via the PHP SoapClient (running against an Apache on localhost), it throws an exception:
SOAP-ERROR: Parsing Schema: can't import schema from 'http://wsdl.service.addres:port/wsdlUri?SCHEMA%2Fsoa.osb.model%2Fsrc%...'
(I've replaced the server name and the rest of the request because they're not relevant.)
If I take that entire URL and paste it into my browser, it returns the WSDL XML.
The PHP code to create the SoapClient is as simple as this:
$options = [
    'trace' => 1,
    'exceptions' => 1, // SoapClient expects 'exceptions' (plural)
];

$wsdl = '/path/to/local/wsdlfile.xml';
$client = new \SoapClient($wsdl, $options);
Does anyone have a clue where to look? I can't think of anything else to try.
I know for sure that my PHP is set up correctly (the code has been working for months at the office and nothing was changed here). Is PHP doing DNS resolution differently and somehow getting another (or no) IP for that corporate server?
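A quick way to test that last hypothesis is to ask PHP directly what the hostname resolves to, from the same Apache/PHP context that runs the SoapClient. A minimal sketch (the hostname is the redacted placeholder from the error above, so substitute the real one):

// Compare what PHP resolves against the hosts-file entry (127.0.0.1).
// gethostbyname() returns the unmodified hostname if resolution fails.
var_dump(gethostbyname('wsdl.service.addres'));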