MAMP localhost on vagrant machine - php

I need to get the response from an external API from inside a Vagrant machine. The API URL is:
http://local.foobar.vhost/controller/action/
Typing the URL on the host machine, with MAMP turned on and vhosts set properly, I get the proper response:
{"response": "success", "message": "not set"}
And the same goes for the cURL command:
curl 'http://local.foobar.vhost/controller/action/'
I get the same response here, so everything is ok.
But when I log into the box with vagrant ssh and try to get the same response with curl, I get the following error:
Couldn't resolve host local.foobar.vhost
I also tried adding the port that MAMP uses, which is 80:
curl 'http://local.foobar.vhost:80/controller/action/'
But it gives me the same error.
How can I make this work?
Thanks!
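One likely cause is that the vhost only exists in the host machine's DNS/hosts setup, so the guest has no way to resolve it. As a minimal sketch (assuming a default VirtualBox NAT setup, where the host is usually reachable from the guest at 10.0.2.2), you could point the name at the host from inside the box and retry:
echo "10.0.2.2 local.foobar.vhost" | sudo tee -a /etc/hosts
curl 'http://local.foobar.vhost/controller/action/'
The exact IP depends on your provider and networking mode; a private-network IP configured in the Vagrantfile would work the same way.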

Related

cURL on local WordPress site returns: Error 6 (Could not resolve host)

I have a local WordPress installation running at: https://catalogue3.test.
Note that all .test domains should resolve to localhost, as I use Laravel Valet. However, when I execute the following code in my Laravel project, I get the exception shown below.
$client = new \GuzzleHttp\Client();
$response = $client->request('GET', "https://catalogue3.test", ['verify' => false]);
ConnectException
cURL error 6: Could not resolve: catalogue3.test (Domain name not
found) (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
When I run the command below in the terminal, the WordPress page is displayed.
curl https://catalogue3.test/ --insecure
Add
ip catalogue3.test
to your /etc/hosts file
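For illustration (assuming the site should resolve to localhost, as the question states), the entry would look like:
127.0.0.1 catalogue3.test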
I tried adding the domain to hosts and changing the DNS in my network settings; this answer is what worked for me:
A quick way to check whether this is your problem is to run curl --version and php --ri curl. The versions should match. If they don't, it's probably because brew has installed curl-openssl, which can be removed with:
brew uninstall curl-openssl --ignore-dependencies
Maybe there's a way to configure the installed curl-openssl properly; I've not investigated this yet.
I solved this by adding catalogue3.test to /etc/hosts, even though I was using Dnsmasq and in theory shouldn't need it.
In my case (on macOS) I had to add 127.0.0.1 as the first DNS server option in my Wi-Fi settings.
Some useful info here too: https://github.com/laravel/valet/issues/736

How can I call my API on an EC2 instance (Ubuntu 16.04) using a Docker container

I call my API at the following URL:
http://IPv4 Public IP:8000/login
I pull the code through docker-compose up, which gives me the whole project configuration with PHP 7.1.8. The
php artisan serve
command starts successfully on 127.0.0.1:8000.
But I am using an AWS EC2 (Ubuntu 16.04) instance, so I call the API "IPv4 Public IP:8000/login" in Postman.
It gives me this error:
could not get any response
there was an error connecting to IPv4 Public IP:8000/login
If you are using a Docker container, it will allocate a port to your running image.
For example, if your image is running on port 81, add a security group rule of type "Custom TCP Rule" for port 81.
Then it will run, and your API calls will look like:
http://yourIp:81/api
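Separately from the security group rule, php artisan serve bound to 127.0.0.1 is only reachable from inside the container itself. A minimal sketch of how one might expose it (the port 8000 comes from the question; the exact docker-compose layout is an assumption):
php artisan serve --host=0.0.0.0 --port=8000
Then publish the port in docker-compose.yml (for example a ports entry of "8000:8000") and add an inbound security group rule for TCP 8000, so Postman can reach http://<IPv4 Public IP>:8000/login.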

php command is not recognized in WAMP

I'm working with WebSockets. I came across the article below, downloaded their files, and tried running it on my localhost.
https://www.sanwebe.com/2013/05/chat-using-websocket-php-socket
What I understand is that it uses the local server's WebSocket server.
But I have a problem starting up the server. I'm using Windows 10 with WAMP 2.2. As far as I checked, the extension needed for WebSockets is enabled in my php.ini.
I followed this example, running cmd from what should be the right path to start it, but to no avail:
https://www.sanwebe.com/2013/05/chat-using-websocket-php-socket/comment-page-1#comment-5593
It says 'php.exe' is not recognized as an internal or external ...
I then searched online again and set the path to my PHP folder in the system's environment variables. The path is: C:\wamp\bin\php. Then I closed cmd and relaunched it. Nothing worked; the same error shows up.
This is what I did on cmd:
1) cd C:\wamp\bin\php
2) php.exe -q C:\projects\myfolder\server.php
Please help me connect to the WAMP server's WebSocket server so I can run the example I've downloaded. In the console, the error shown is:
WebSocket connection to 'ws://localhost:9000/demo/server.php' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
I found the answer on this site: http://rodrixar.blogspot.my/2011/07/how-to-run-php-sockets-in-wamp.html
I did the following:
1) Open cmd
2) cd C:\wamp\bin\php\php5.6.19
3) php.exe -q C:\projects\mysite\server.php
4) A firewall prompt opens up; allow access for the PHP CLI
5) The connection is now made.
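The likely reason the original attempt failed is that php.exe lives in a versioned subfolder, not in C:\wamp\bin\php itself. As a sketch (the version folder php5.6.19 is taken from the answer above; yours may differ), adding that folder to PATH for the current cmd session lets you run the server script from any directory:
set PATH=%PATH%;C:\wamp\bin\php\php5.6.19
php.exe -q C:\projects\myfolder\server.php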

Laravel 5.1 - HHVM - S3Exception in WrappedHttpHandler.php line 152

After upgrading to Laravel 5.1 from 5.0, I'm having problems with AWS S3.
I created a test route to verify that S3 was working, and it seems that it is not:
get('/test', function () {
    return Storage::disk('s3')->exists('temp/file.jpg') ? 'true' : 'false';
});
The following error is returned:
S3Exception in WrappedHttpHandler.php line 152:
Error executing "HeadObject" on "https://s3.amazonaws.com/rugapp/temp/file.jpg"; AWS HTTP error: Client error response [url] https://s3.amazonaws.com/app/temp/file.jpg [status code] 403 [reason phrase] Forbidden (client): 403 Forbidden
After doing some research, it seems this issue may or may not be related to HHVM. I am using Laravel Homestead which runs the following:
Ubuntu 14.04
PHP 5.6
HHVM
Nginx
After reading this, I upgraded HHVM to 3.8-dev and restarted Nginx. The problem remained.
Does anyone have any insight on how to resolve this problem?
UPDATE: It seems to work fine now but I'm not sure why. I haven't made any changes overnight. Strange.
A similar issue happened in my local Homestead development environment, while my app running on Linode worked fine.
After checking S3 permissions, checking out an old version, etc., the problem went away when I restarted Homestead:
homestead halt
homestead up --provision
I had this same error. I believe it happened because I switched wireless networks as I was developing. After restarting the virtual machine, the error went away.
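For what it's worth, a 403 Forbidden on HeadObject usually points at credentials, bucket permissions, the bucket's region, or clock drift inside the VM (which would also explain why a restart fixed it). A quick thing to double-check is the s3 disk entry in config/filesystems.php; the sketch below uses Laravel 5.1's default keys with placeholder values:
's3' => [
    'driver' => 's3',
    'key'    => 'your-access-key',
    'secret' => 'your-secret-key',
    'region' => 'your-region',   // must match the bucket's actual region
    'bucket' => 'your-bucket',
],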

Neo4J: test from php cmd line works but web doesn't?

Using the following bits:
<?php
require('vendor/autoload.php');
$client = new Everyman\Neo4j\Client('localhost', 7474);
print_r($client->getServerInfo());
?>
If I run this as php test.php I get the expected output.
If I run this via http://server/test.php I get connection errors.
[24-Jun-2014 05:49:52] PHP Fatal error: Uncaught exception 'Everyman\Neo4j\Exception'
with message 'Can't open connection to http://localhost:7474/db/data/' in
/var/www/html/vendor/everyman/neo4jphp/lib/Everyman/Neo4j/Transport/Curl.php:91
Clearly I've monkeyed up something with either my PHP config or the installation of this library. Suggestions on where to look?
Installed per these instructions.
Running on CentOS 6.4 (x64), PHP 5.3.3
NOTE: I've made successful connections from other machines back to this server so I know the neo4j server is working. It just doesn't seem to want to let me connect locally when called via browser.
I had the same issue; it was caused by SELinux.
Try disabling it:
echo 0 >/selinux/enforce
then recheck the connection.
If that fixes it, configure the SELinux permissions properly and re-enable enforcement. In my case, httpd_can_network_connect needed to be on:
setsebool -P httpd_can_network_connect on
echo 1 >/selinux/enforce
Helpful manual:
http://wiki.centos.org/TipsAndTricks/SelinuxBooleans
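If you want to verify the boolean before and after changing it (assuming the standard SELinux tools are installed), you can check its current value with:
getsebool httpd_can_network_connect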
