Overview of the problem
I am trying to get started with Laravel Dusk, but I cannot get it to run correctly. Based on the console output, some Google searches, and the fact that I hit the same issue with a similar package before Dusk was released, my guess is that the problem comes from being behind a proxy at my workplace. I could of course be wrong about the proxy, but it's the only lead I have for now. If I'm honest, I don't even understand why it needs my proxy settings at all to run tests against a local site.
Set up
Windows 7 64bit
Laravel 5.4.15
Dev env: Laragon
PHP 7
Dusk Installation
I did the following to install Dusk:
composer require laravel/dusk
Set the following in .env: APP_URL=http://ticket.dev
Added the following to my AppServiceProvider register() method:
app/Providers/AppServiceProvider.php
use Laravel\Dusk\DuskServiceProvider;
//...
public function register()
{
    if ($this->app->environment('local', 'testing')) {
        $this->app->register(DuskServiceProvider::class);
    }
}
Running Dusk
When I run php artisan dusk, I get the following in my console:
1) Tests\Browser\ExampleTest::testBasicExample
Facebook\WebDriver\Exception\WebDriverException: JSON decoding of remote response failed.
Error code: 4
The response: '<!DOCTYPE html>
<html>
<head>
<title>Laragon</title>
// the rest of the output is what http://localhost produces
So it appears to be hitting http://localhost and not my APP_URL?
Attempted solutions
I found this GitHub wiki page, and after browsing through the vendor folders related to Dusk, I tried setting the proxy where the driver is created in the driver() method of tests/DuskTestCase.php.
I've tried the following, but I get the same console output as mentioned before:
protected function driver()
{
    // same as DesiredCapabilities::chrome() except with proxy info
    $capabilities = new DesiredCapabilities([
        WebDriverCapabilityType::BROWSER_NAME => WebDriverBrowserType::CHROME,
        WebDriverCapabilityType::PLATFORM => WebDriverPlatform::ANY,
        WebDriverCapabilityType::PROXY => [
            'proxyType' => 'manual',
            'httpProxy' => 'http://proxy:8080',
            'sslProxy' => 'http://proxy:8080',
        ],
    ]);

    return RemoteWebDriver::create(
        'http://localhost:9515',
        $capabilities
    );

    // original code after installation
    // return RemoteWebDriver::create('http://localhost:9515', DesiredCapabilities::chrome());
}
and ...
protected function driver()
{
    $capabilities = new DesiredCapabilities([
        WebDriverCapabilityType::BROWSER_NAME => WebDriverBrowserType::CHROME,
        WebDriverCapabilityType::PLATFORM => WebDriverPlatform::ANY,
        WebDriverCapabilityType::PROXY => [
            'proxyType' => 'manual',
            'httpProxy' => 'http://proxy:8080', // have also tried without specifying http://
            'sslProxy' => 'http://proxy:8080',  // have also tried without specifying http://
        ],
    ]);

    return RemoteWebDriver::create(
        'http://localhost:9515',
        $capabilities,
        null,
        null,
        $http_proxy = 'http://proxy', // have also tried without specifying http://
        $http_proxy_port = '8080',
        null
    );
}
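Another idea I'm considering (not verified, listed here only for completeness) is handing the proxy to Chrome itself via ChromeOptions instead of the WebDriver proxy capability, and telling Chrome to bypass the proxy for local hostnames so that requests to ticket.dev stay local. Roughly, in tests/DuskTestCase.php (proxy:8080 is the same placeholder proxy as above):

use Facebook\WebDriver\Chrome\ChromeOptions; // extra import at the top of the file

protected function driver()
{
    // Pass the proxy straight to Chrome; --proxy-bypass-list tells Chrome
    // not to send the local site through the proxy.
    $options = (new ChromeOptions)->addArguments([
        '--proxy-server=http://proxy:8080',
        '--proxy-bypass-list=localhost;127.0.0.1;ticket.dev',
    ]);

    $capabilities = DesiredCapabilities::chrome();
    $capabilities->setCapability(ChromeOptions::CAPABILITY, $options);

    return RemoteWebDriver::create('http://localhost:9515', $capabilities);
}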
Any help on getting this working would be appreciated, thank you!
Related
So I'm running CakePHP 4 on an EC2 instance with AWS Elasticsearch 7, and I've set up the Elasticsearch plugin in CakePHP.
composer require cakephp/elastic-search "^3.0"
I've added the elastic datasource connection in config/app.php
'elastic' => [
    'className' => 'Cake\ElasticSearch\Datasource\Connection',
    'driver' => 'Cake\ElasticSearch\Datasource\Connection',
    'host' => 'search-DOMAIN.REGION.es.amazonaws.com',
    'port' => 443,
    'transport' => "AwsAuthV4",
    'aws_access_key_id' => "KEY",
    'aws_secret_access_key' => "SECRET",
    'aws_region' => "REGION",
    'ssl' => 'true',
],
... and I've activated the plugin:
use Cake\ElasticSearch\Plugin as ElasticSearchPlugin;

class Application extends BaseApplication
{
    public function bootstrap()
    {
        $this->addPlugin(ElasticSearchPlugin::class);
        // ...
    }
}
I've manually added 1 index record to ES via curl from the EC2 instance. So I know the communication between EC2 and ES works.
curl -XPUT -u 'KEY:SECRET' 'https://search-DOMAIN.REGION.es.amazonaws.com/movies?pretty' -d '{"director": "Burton, Tim", "genre": ["Comedy","Sci-Fi"], "year": 1996, "actor": ["Jack Nicholson","Pierce Brosnan","Sarah Jessica Parker"], "title": "Mars Attacks!"}' -H 'Content-Type: application/json'
I also managed to search for this record via curl without any problems.
In AppController.php I tried this simple search just to see if the plugin works, and for the life of me I can't get it to work.
# /src/Controller/AppController.php
...

use Cake\ElasticSearch\IndexRegistry;

class AppController extends Controller
{
    public function initialize(): void
    {
        parent::initialize();

        $this->loadModel('movies', 'Elastic');

        $query = $this->movies->find('all');
        $results = $query->toArray();
        // ...
    }
}
I'm getting the following error:
Client error: POST
https://search-DOMAIN.REGION.es.amazonaws.com/movies/movies/_search
resulted in a 403 Forbidden response: {"message":"The security token
included in the request is invalid."}
Elastica\Exception\Connection\GuzzleException
It seems like the plugin adds the index name twice for some reason. I looked everywhere for a setting I might have missed. If I copy the above URL, remove the duplicate index segment, and paste it into a browser, it works fine.
https://search-DOMAIN.REGION.es.amazonaws.com/movies/_search
Am I missing something here?
I've even tried this method, and I get the same problem with the duplicated index name in the URL.
$Movies = IndexRegistry::get('movies');
$query = $Movies->find('all');
$results = $query->toArray();
I've tried a new/clean CakePHP install and I get the same problem. Is there something wrong with the plugin? Is there a better approach to communicating with ES from CakePHP?
I'm not familiar with the plugin or Elasticsearch, but as far as I understand, one of the movies segments is the index name and the other is the type name, and the type name - at least according to the documentation - should be singular, i.e. the path would instead be expected to look like:
/movies/movie/_search
Furthermore, Index classes assume that the type mapping has the singular name of the index. For example the articles index has a type mapping of article.
https://book.cakephp.org/elasticsearch/3/en/3-0-upgrade-guide.html#types-renamed-to-indexes
Whether that would be the correct path with respect to what the Elasticsearch version in use supports might be a different question.
You may want to open an issue over at GitHub.
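In the meantime, if you want to experiment, one option might be a concrete index class that pins the type name. Treat this purely as a sketch: MoviesIndex is a name I'm introducing here, and it assumes your version of the plugin still exposes a setType() setter on Cake\ElasticSearch\Index, which is worth verifying against the installed source.

// src/Model/Index/MoviesIndex.php (hypothetical)
namespace App\Model\Index;

use Cake\ElasticSearch\Index;

class MoviesIndex extends Index
{
    public function initialize(array $config): void
    {
        // Assumption: setType() controls the type segment of the request path,
        // i.e. /movies/movie/_search instead of /movies/movies/_search.
        $this->setType('movie');
    }
}

With that class in place, $this->loadModel('Movies', 'Elastic') should resolve to it instead of an auto-generated index.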
I've been running into issues with Laravel Cashier since I deployed my app to Heroku.
On my local environment everything is fine, but on my staging server no POST request body is ever sent to Stripe.
I tried swapping API keys, as I thought maybe the API version on Stripe differed between the two, but that didn't work (see screenshots below).
Things I know are correct:
API creds - they wouldn't show up in the Stripe logs if they weren't
Composer versions match in both environments (Laravel Cashier 10.5.2, Laravel 5.8.36, stripe-php 17.7.0)
I can't seem to find anything that logs outgoing API requests. I've even tried manually calling the Stripe functions as low in the stack as I can get - still no POST body.
I'm sure someone else has run into this. A Google search on Laravel Cashier ALWAYS sends me back to the Laravel website, which is no help at all.
This is my Stripe method on my User model; all other code is from Cashier:
public function activateSubscription() {
    if ($this->hasStripeId() &&
        $this->has_default_payment_method &&
        $this->has_active_subscription) {
        return;
    }

    try {
        $this->newSubscription(env('STRIPE_SUBSCRIPTION_NAME'), env('STRIPE_PLAN_ID'))
            ->create(null, [
                'name' => $this->fullname,
                'email' => $this->email,
            ]);

        $this->notify(new UserRegistered());
    } catch (\Stripe\Exception\InvalidRequestException $e) {
        Log::debug('Invalid Request', [
            'body' => $e->getHttpBody(),
            'headers' => $e->getHttpHeaders(),
            'json' => $e->getJsonBody(),
            'error_code' => $e->getStripeCode(),
        ]);
    }
}
Edit
I've removed some personal details from the POST request body.
I figured it out: I had a \n at the end of my Stripe secret API key in the Heroku environment variables.
For some reason that caused the POST body to be stripped from every request to Stripe.
I removed it, ran php artisan config:clear, and it worked.
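If anyone wants to guard against this kind of stray whitespace in the future, trimming the value where the config reads it is one option. Just a sketch - depending on your Cashier version the key may live in config/services.php or config/cashier.php:

// config/services.php (or config/cashier.php, depending on your Cashier version)
'stripe' => [
    'key' => trim(env('STRIPE_KEY', '')),
    // trim() removes a stray trailing "\n" copied into the env var,
    // which is exactly what broke my requests on Heroku
    'secret' => trim(env('STRIPE_SECRET', '')),
],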
I have an AWS S3 LocalStack service declared in docker-compose as:
version: "3"
services:
...
localstack:
image: localstack/localstack
environment:
- SERVICES=s3
- USE_SSL=false
- DEBUG=1
- DATA_DIR=/tmp/localstack/data
ports:
- "4572:4572"
- "4566:4566"
- "8083:8080"
networks:
- mynetwork
After building, everything works fine. I am able to connect to the container:
docker exec -ti my-project_localstack_1 /bin/bash
And make a new bucket using the command line:
awslocal s3 mb s3://my-bucket
Initially I was able to put new objects into the bucket from my PHP app, but I was not able to list them from PHP, Postman, or a browser.
I did some research and found this solution:
awslocal s3 mb s3://my-bucket
awslocal s3api put-bucket-acl --bucket my-bucket --acl public-read
Now I am able to get the list of objects by prefix in anonymous mode (no credentials or tokens) from my Chrome browser and from Postman.
But $s3Client->listObjects(...) fails: it always returns an empty result.
Note: I am still able to execute $s3Client->putObject(...).
I also checked other calls, $s3Client->getBucketAcl(...) and $s3Client->getObjectUrl(...), and they work fine.
What I want to say is that the connection from PHP to the LocalStack host is fine and the instance is working and responding.
Here is the code on the PHP side that I use to instantiate $s3Client:
class S3
{
    /** @var \Aws\S3\S3Client */
    private static $client = null;

    private static function init() // Lazy S3Client initiation
    {
        if (is_null(self::$client)) {
            self::$client = new Aws\S3\S3Client([
                'region' => 'us-east-1',
                'version' => '2006-03-01',
                'credentials' => false,
                'endpoint' => "http://localstack:4572",
                'use_path_style_endpoint' => true,
                'debug' => true
            ]);
        }
    }

    ...

    public static function list_objects($bucket, array $options)
    {
        self::init();

        return self::$client->listObjects([
            'Bucket' => "my-bucket",
            'Prefix' => "year/month/folder/",
            'Delimiter' => $options['delimiter'] ? $options['delimiter'] : '/',
        ]);
    }

    ...
}
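For completeness, this is roughly how I call the wrapper; under normal conditions I would expect the objects under the Contents key of the result (same placeholder bucket and prefix as above), but for me it comes back empty:

$result = S3::list_objects('my-bucket', ['delimiter' => '/']);

// Normally each entry in Contents has a Key, Size, LastModified, etc.
foreach ($result['Contents'] ?? [] as $object) {
    echo $object['Key'], ' (', $object['Size'], " bytes)\n";
}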
This method returns the following (note the @metadata->effectiveUri):
array (size=2)
  'instance' => string '0000000040d78e4d00000000084dbdb3' (length=32)
  'data' =>
    array (size=1)
      '@metadata' =>
        array (size=4)
          'statusCode' => int 200
          'effectiveUri' => string 'http://localstack:4572/my-bucket?prefix=year%2Fmonth%2Ffolder%2F&delimiter=%2F&encoding-type=url'
If I take this URL and run it in a browser, in Postman, or with curl from the PHP docker container, it returns the list of my files. It only returns an empty array when I call it through $s3Client in PHP.
I have a feeling that something is wrong with permissions, but since I don't have much knowledge of or experience with the AWS S3 service I can't figure it out. It seems confusing that some "default" permissions would allow the client to put objects but not to read the listing, and that I can read the listing of objects using a browser or curl, but not through the app.
Any ideas?
I've encountered the same problem.
However, when I changed the docker-compose image as follows, I was able to avoid it:
image: localstack/localstack:0.11.0
I think it may be a regression in localstack.
I'm working with buddy.works for continuous integration on my project. The issue is that my PHPUnit tests pass on my local computer but fail in the buddy.works pipeline.
I have been googling and reading for over two days now, and although I found many similar problems, I haven't encountered a solution that even points me in the right direction.
public function test_orders_route_unauthenticated_user() {
    $data = [
        'orderID' => '001241',
        'sku' => '123456',
        'quantity' => 9,
        'pricePerUnit' => 78,
        'priceTotal' => 702,
    ];

    $this->json('POST', 'api/orders', [$data])->assertStatus(401);
}
The test fails with status code 500 instead of 401 and I don't know what is causing this.
Edit:
Laravel 5.8.17 with the PHPUnit version that ships with it; everything works as expected until it is run on buddy.works.
I had the same problem while testing my Laravel 8 API.
I had to send a header with my request.
public function testGetUsersWithoutAuthentication()
{
    $response = $this->withHeaders(['Accept' => 'application/json'])->get('api/users');

    $response->assertStatus(401);
}
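Equivalently, the JSON helpers add that header for you, so this should behave the same way (same api/users route as above):

public function testGetUsersWithoutAuthenticationUsingJsonHelper()
{
    // getJson() sends Accept: application/json, so an unauthenticated request
    // gets the 401 JSON response instead of a redirect to the login page.
    $response = $this->getJson('api/users');

    $response->assertStatus(401);
}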
In my project I use the code below to add some named assets to Assetic; one of them uses the lessphp filter.
public function prepend(ContainerBuilder $container)
{
    $configs = $container->getExtensionConfig($this->getAlias());
    $config = $this->processConfiguration(new Configuration(), $configs);

    $this->configureAsseticBundle($container, $config);
}

protected function configureAsseticBundle(ContainerBuilder $container, array $config)
{
    foreach (array_keys($container->getExtensions()) as $name) {
        switch ($name) {
            case 'assetic':
                $container->prependExtensionConfig(
                    $name,
                    array(
                        'assets' => array(
                            'some_less' => array(
                                'inputs' => array(
                                    '#SomeBundle/Resources/public/less/some.less'
                                ),
                                'filters' => array('lessphp'),
                            ),
                        )
                    )
                );
                break;
        }
    }
}
When I dump assets using assetic:dump, everything works fine in the production environment, but in the dev environment the lessphp filter for this named asset only works for a few page refreshes; after some time it stops working and I need to remove all the cache. After removing the cache it works fine again... for a few minutes...
I also noticed that it stops working when I edit any bundle extension class (DependencyInjection/[BundleName]Extension.php).
Does anyone have any idea what I did wrong?
I suspect this is because of the issue reported here. There's a bug in the Assetic code that will incorrectly "clear" out the filters for an asset during rendering, so they are never applied.
You should be able to reliably reproduce it by clearing the cache with php app/console cache:clear. But you should then be able to "fix" it by completely removing the dev cache files and reloading the page.
The PR I referenced has not been merged yet (it's waiting for a test), but it's a couple of lines of code you can add manually just to confirm it's the fix you're looking for.