I just updated my Laravel application from 5.8.x to 6.18.x. I also updated the ENV name declarations to reflect the new Laravel pattern:
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION.
I set AWS_DEFAULT_REGION to eu-west-1 since I am using eu-west-1.amazonses.com in the SES setup.
But when I try to send an email now, I get: Error executing "SendRawEmail" on "https://email.eu-central-1.amazonaws.com", even though eu-central-1 is not declared anywhere in my app. I've been trying to wrap my head around this for a while now but cannot find a solution.
Also, it seems like AWS wants me to verify the "to" address, which is even more confusing. I've been out of the sandbox for over two years, and on the live server with the older Laravel instance the mail still works just fine.
I have no actual code, since this is just stuff in my .env file, plus this inside my config/services.php file:
'ses' => [
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
],
I really don't know what else I could check.
I suggest you set the region directly in config/services.php and see if it works (see the sketch below). If it does, I would check why the values set in the environment file are not passing down the chain.
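As a minimal sketch of that diagnostic (hardcoding the region only to rule out the .env lookup, not something to keep in production):

'ses' => [
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => 'eu-west-1', // hardcoded instead of env('AWS_DEFAULT_REGION')
],

If mail sends against eu-west-1 with this in place, the problem is in how the environment values are loaded, not in the SES configuration itself.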
Hope this helps.
Thanks for all your suggestions. The problem was actually not AWS related. After the upgrade from 5.8 to 6.x, the old .env file was still being used, so any changes I made to the .env file had no effect. I don't know why, though, as I cleared all caches and config files after the upgrade. But it's working again now.
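For anyone who lands here with the same symptom, these are the standard artisan commands for clearing the caches that can pin an application to stale configuration (nothing here is specific to the 6.x upgrade):

php artisan config:clear
php artisan cache:clear
php artisan route:clear
php artisan view:clear

If config values still don't change after this, check that the deployed .env file is actually the one you are editing.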
I'm getting a "Could not construct ApplicationDefaultCredentials" from Google Recaptcha Enterprise, but only on our remote server. I've tried everything I can think of to isolate the issue, but I've had no luck.
I have two Recaptcha Enterprise keys: One for testing, and one for prod.
The testing key works fine on localhost. I've tried both the testing and prod key on our staging server, but I keep getting the same error.
Could not construct ApplicationDefaultCredentials
Things I've checked:
The key is successfully requesting tokens (I can see them in the form)
The service account .json credentials are being picked up correctly (I've tried outputting the contents to ensure they can be read)
The domains are correctly configured and allowed (Google helpfully lets you know if this isn't the case)
The Project ID is also correctly being picked up and sent
Basically all the values are present (project ID, site ID, service account details) and the domain is allowed, but as soon as the code runs on the remote staging server, it fails to create credentials.
I'm struggling to figure out what the difference could be.
public static function createRecaptchaAssessment(
    string $siteKey, // Present
    string $token, // Present
    string $projectId // Present
): Assessment {
    $options = [
        'keyFile' => config('services.google.app_credentials'), // Present
        'projectID' => $projectId
    ];
    $client = new RecaptchaEnterpriseServiceClient($options); // <-- Throws the ApplicationDefaultCredentials exception
    ...
Things to consider: The staging server is hosted on an elasticbeanstalk.com subdomain, and the site is password protected with .htpasswd. I know elasticbeanstalk.com is sometimes blacklisted because it is a blanket domain, but we're only specifying the subdomain, and there's no "This domain is not allowed" message from Google. And there shouldn't be any inbound connections being blocked by .htpasswd that I'm aware of.
I've tried creating a new Service Account, just in case there was something configured incorrectly (it has the Recaptcha Enterprise Agent permission), but nothing changed.
Any ideas on how else I could debug this would be greatly appreciated. (Note: this is a PHP/Laravel 9 project hosted on AWS Elastic Beanstalk, but I don't think that's a factor.)
Key takeaway: The ApplicationDefaultCredentials error is almost certainly related to the Service Account .json not being picked up by your application.
Full version: So apparently I took some poor advice from another SO answer, which suggested you could pass the path to the Service Account credentials .json file through the following array:
$options = [
    'keyFile' => config('services.app_credentials'), // DON'T DO THIS
    'projectID' => $projectId
];
I spent a long time looking through the Google library only to discover that neither array key is used. What actually happens is this:
If the library isn't passed a path to the Service Account credentials .json file, it then looks for an environment variable called GOOGLE_APPLICATION_CREDENTIALS.
If you have this ENV set and your environment supports it, that's actually what the library goes on. (Which is why you might find that it works locally but not remotely, as I did.)
When you deploy Laravel remotely, all the .env variables are cached and so not available through getenv(), meaning the library is not able to find GOOGLE_APPLICATION_CREDENTIALS even if you have it included in your .env.
The solution is to pass the path through the credentials array key:
$options = [
    'credentials' => config('services.app_credentials')
];
$client = new RecaptchaEnterpriseServiceClient($options);
Now it works perfectly.
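For completeness, config('services.app_credentials') assumes an entry along these lines in config/services.php; the exact key name and default path are my own illustration, so adjust them to wherever you store the file:

// config/services.php
// Resolved once when the config is cached, so it survives config:cache.
'app_credentials' => env('GOOGLE_APPLICATION_CREDENTIALS', storage_path('app/google/service-account.json')),

Because config files are the one place env() is read before caching, this keeps the path available even when getenv() no longer sees your .env values.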
I recently added laravel/nexmo-notification-channel to my Laravel project, which also installed Nexmo/nexmo-laravel.
After installing, I published the vendor files so that I get config/nexmo.php, and in there I noted that it looks in the .env file for NEXMO_KEY and NEXMO_SECRET.
So I went ahead and created these within my .env file:
NEXMO_KEY=[my_key]
NEXMO_SECRET=[my secret]
NEXMO_SIGNATURE_SECRET=[my signature secret]
After this, I added Nexmo to my service providers in config/app.php:
'providers' => [
    ...,
    Nexmo\Laravel\NexmoServiceProvider::class
]
and also added the following in config/services.php:
'nexmo' => [
    'key' => env('NEXMO_KEY', ''),
    'secret' => env('NEXMO_SECRET', ''),
    'sms_from' => '[my number]'
],
But I still get the following error when trying to send an SMS using the Illuminate\Notifications\Messages\NexmoMessage class:
"message": "Provide either nexmo.api_secret or nexmo.signature_secret",
I can use these same credentials to send an SMS from the CLI, so why can't I send it from Laravel?
There have been a couple of valid workarounds for this, but at first glance it looks like the Nexmo package does the work of bringing the ENV secrets into Laravel's config. Because of config caching, you should never call env() outside of the config files; instead use config() - in this case, config('nexmo.api_secret').
My main point here, though, is that I can't look into the "correct" solution for you, because the package is abandoned. Nexmo is no longer Nexmo, it's Vonage, and the Laravel core team has subsequently updated the notification-channel package.
For supported integration with Vonage services (SMS), please use the following package:
https://github.com/laravel/vonage-notification-channel
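As a rough sketch of what the supported channel looks like in use (this follows the Laravel notification docs for that package; the OrderShipped class name and message text are my own illustration):

use Illuminate\Notifications\Messages\VonageMessage;
use Illuminate\Notifications\Notification;

class OrderShipped extends Notification
{
    // Route this notification through the Vonage SMS channel.
    public function via($notifiable): array
    {
        return ['vonage'];
    }

    // Build the SMS body that the channel hands to Vonage.
    public function toVonage($notifiable): VonageMessage
    {
        return (new VonageMessage)->content('Your order has shipped!');
    }
}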
I'm not sure exactly why, but Vonage/Nexmo doesn't pick up details from the .env.
Instead, use a global config file to fetch the secrets.
Create a global.php file in the config folder, and add your secrets from the env like this:
<?php

return [
    // Other config values
    'SMS_API_KEY' => env('SMS_API_KEY', ''),
    'SMS_API_SECRET' => env('SMS_API_SECRET', ''),
];
Then you can use the values in your controller as usual:
'key' => config('global.SMS_API_KEY'),
'secret' => config('global.SMS_API_SECRET')
Then recache the config: php artisan config:cache
Hi, I am using the AWS SDK for PHP version 3 to upload files to S3.
I need to get rid of the credentials file (.aws/credentials) because it's causing issues on my production server.
The hard-coded credentials method isn't working in my code; the link is pasted below.
https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/guide_credentials.html#hardcoded-credentials
Kindly provide a valid and working solution for how to use hard-coded credentials.
Please note that if I use the credentials file, everything works OK, so the problem is with the credentials code.
Here is my code where I instantiate my S3 object:
$s3Client = new S3Client([
    'profile' => 'default',
    'region' => 'us-west-2',
    'version' => '2006-03-01',
    'scheme' => 'http',
    'credentials' => [
        'key' => KEY,
        'secret' => SECRET
    ]
]);
You just need to remove the 'profile' => 'default', line, which has the effect of overriding your hard-coded credentials.
I was dealing with this same problem, with much frustration, today and finally solved it. See the related answer here for the same problem on a different Amazon service.
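In other words, the working construction keeps everything from the question except the profile line (same placeholder KEY/SECRET constants as above):

$s3Client = new S3Client([
    'region' => 'us-west-2',
    'version' => '2006-03-01',
    'scheme' => 'http',
    // With no 'profile' key, the SDK uses these credentials directly
    // instead of looking for .aws/credentials.
    'credentials' => [
        'key' => KEY,
        'secret' => SECRET
    ]
]);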
Per the AWS documentation (https://docs.aws.amazon.com/aws-sdk-php/v2/guide/credentials.html):
If you do not provide credentials to a client object at the time of its instantiation (e.g., via the client's factory method or via a service builder configuration), the SDK will attempt to find credentials in your environment when you call your first operation. The SDK will use the $_SERVER superglobal and/or getenv() function to look for the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. These credentials are referred to as environment credentials.
The v3 documentation is here: https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/guide_credentials.html
In my case I am using an IAM role on the machines which host the app; it is easier to manage permissions from the IAM dashboard, and you avoid hardcoding credentials or keeping a config file with credentials.
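With an instance role attached, the client needs no credentials at all; a minimal sketch, assuming the host really does have an IAM role with S3 permissions:

use Aws\S3\S3Client;

// No 'credentials' or 'profile' key: the SDK walks its default
// provider chain (env vars, shared config, then the instance's IAM role).
$s3Client = new S3Client([
    'region' => 'us-west-2',
    'version' => '2006-03-01',
]);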
Full Error
RequestException in CurlFactory.php line 187: cURL error 60: SSL certificate problem: unable to get local issuer certificate (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
Scenario
Before anyone points me to these two Laracasts answers:
https://laracasts.com/discuss/channels/general-discussion/curl-error-60-ssl-certificate-problem-unable-to-get-local-issuer-certificate/replies/52954
I've already looked at them and that's why I'm here.
The problem I have is that I now have the cacert.pem file, BUT it isn't clear where to put it. The answers say to place the file in my XAMPP directory and change my php.ini file, but I'm not using XAMPP for anything; I'm using Laravel's artisan server to run my project.
If XAMPP is not in use, where should I place this file? Moreover, why would an accepted answer be to place it in my XAMPP directory? I don't understand.
My Exact Question
Where do I place the cacert.pem file to stop this error in Laravel 5.4?
Do not ever modify files in the vendor/ folder. Ever. They can and will be overwritten on the next composer update you run.
Here is my Solution for WampServer
I am using PHP 7.1.9 for my WampServer, so change 7.1.9 in the example below to the version number you are currently using.
Download this file: http://curl.haxx.se/ca/cacert.pem
Place this file in the C:\wamp64\bin\php\php7.1.9 folder
Open php.ini and find this line:
;curl.cainfo
Change it to:
curl.cainfo = "C:\wamp64\bin\php\php7.1.9\cacert.pem"
Make sure you remove the semicolon at the beginning of the line.
Save changes to php.ini, restart WampServer, and you're good to go!
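To confirm the new setting was actually picked up after the restart, you can ask PHP directly (a quick sanity check, not part of the fix itself):

// Should print the path you set in php.ini, not false or an empty string.
var_dump(ini_get('curl.cainfo'));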
A quick but insecure solution (not recommended).
Using cURL:
Set CURLOPT_SSL_VERIFYPEER to false
Using Guzzle: set verify to false, for example:
$client->request('GET', 'https://somewebsite.com', ['verify' => false]);
You can use GuzzleHttp\Client:
$client = new Client(['verify' => false]);
The solution suggested by some users, making changes to the \vendor\guzzlehttp\guzzle\src\Client.php file, is the worst advice, as manual changes made to the vendor folder are overwritten when you run composer update.
The solution suggested by Jeffrey is a dirty, shorthand fix, but it is not recommended in production applications.
The solution suggested by kjdion84 is perfect if you have access to the php.ini file on the web server. If you are using shared hosting, it may not be possible to edit php.ini.
When you don't have access to the php.ini file (e.g. shared hosting):
Download this file: http://curl.haxx.se/ca/cacert.pem or https://curl.se/docs/caextract.html
Place this file in the root folder of your Laravel project.
Add a verify key to the GuzzleHttp\Client constructor with its value set to the path of the cacert.pem file.
With Laravel 5.7 and GuzzleHttp 6.0
// https://example.com/v1/current.json?key1=value1&key2=value2
$guzzleClient = new GuzzleHttp\Client([
    'base_uri' => 'https://example.com',
    'verify' => base_path('cacert.pem'),
]);
$response = $guzzleClient->get('v1/current.json', [
    'query' => [
        'key1' => 'value1',
        'key2' => 'value2',
    ]
]);
$response = json_decode($response->getBody()->getContents(), true);
This was stressful to figure out, but here is the exact answer for people using Laravel who have this problem.
My exact application versions are...
Laravel: 5.4
Guzzlehttp: 6.2
Laravel Socialite: 3.0
Download a fresh copy of this curl certificate from this link: https://gist.github.com/VersatilityWerks/5719158/download
Save the file at this path, starting from the base root of your Laravel application: vendor/guzzlehttp/guzzle/src/cacert.pem
Next, in that same directory, open up RequestOptions.php, scroll down to the constant called CERT, and change it to const CERT = 'cacert.pem';. This should fix it all.
EDIT
As people are pointing out, you should never edit the vendor folder; this was just a quick fix for an application I was building in my spare time. It wasn't anything majorly important like an application for my company, so use this method at your own risk! Please check out the other answers if you need something more concrete.
I was sending a request from domain X to Y. My problem was with the certificate used on domain Y (it wasn't a self-signed cert, btw). The domain belonged to me and I had the certificate, so the quick fix was to use domain Y's certificate in domain X's application:
On domain X:
Using Laravel 7 HTTP wrapper:
\Http::withOptions(['verify' => 'path-to-domain-Y-certificate']);
Using guzzle:
new GuzzleHttp\Client(['verify' => 'path-to-domain-Y-certificate']);
Or you could just contact your SSL provider to resolve the issue.
Note: Storing your domain's certificate on another system isn't a secure method; use this approach only if you can ensure your certificate's security.
I never found what was wrong with the certificate; no browser seemed to have a problem with it. The only problem showed up in Laravel, via cURL.
I had this problem while running Valet and attempting to make an API call from one site in Valet to another.
Note that I'm working on OS X.
I found the solution here: https://github.com/laravel/valet/issues/460
In short, you have to append the Valet pem to the system CA bundle.
Just run this:
cp /usr/local/etc/openssl/cert.pem /usr/local/etc/openssl/cert.pem.bak && cat ~/.config/valet/CA/LaravelValetCASelfSigned.pem >> /usr/local/etc/openssl/cert.pem
I use the Laragon server and I faced the same problem. I downloaded the SSL certificate bundle from http://curl.haxx.se/ca/cacert.pem and placed it in C:/laragon/etc/ssl/ (if cacert.pem already exists, replace it with the new one).
Restart the server and everything is fine.
You can set the path based on this article on Medium.com: How to fix cURL error 60: SSL certificate problem.
Steps to follow:
Open http://curl.haxx.se/ca/cacert.pem
Copy the entire page and save it as a “cacert.pem”
Open your php.ini file and insert or update the following line: curl.cainfo = "[pathtofile]cacert.pem"
Someone else recently asked about the same problem, and it seems my answer was the solution for them.
Here is the post I mentioned: URL Post
This is what I said:
I'll be fully honest: I don't know anything about Laravel. But I had the same problem, as have many others, on Symfony, and just like you I tried many things without success.
Finally, this solution worked for me: URL solution
It indicates that instead of a certificate problem, it could come from an environment incompatibility. I used XAMPP instead of WAMP and it worked.
I solved it on my end by disabling verification completely under the debug setting, since in this local scenario security is not an issue at all.
$http = new \GuzzleHttp\Client([
    'base_uri' => config('services.passport.login_endpoint'),
    'verify' => config('app.debug') ? false : true,
    'defaults' => [
        'exceptions' => false,
    ]
]);
I resolved my problem on WAMP by setting:
curl.cainfo = "C:\wamp64\bin\php\php7.1.9\cacert.pem"
This happened to me in development with Laravel 8 using the default server started by running:
php artisan serve
The solution for me was similar to the other answers that have you download cacert.pem and edit the php.ini file. But since I was not using Wamp, I checked the PATH in my environment variables to find which PHP installation Windows defaulted to and made the changes there.
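If you're unsure which php.ini the CLI is loading, PHP will tell you directly:

php --ini

The "Loaded Configuration File" line in the output is the file to edit.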
For PHP ^8.1 on WampServer:
Download https://curl.se/ca/cacert.pem.
Place the cacert.pem file inside C:\wamp64\bin\php\php8.1.0.
Open php.ini and find this line:
;curl.cainfo
Change it to:
curl.cainfo = "C:\wamp64\bin\php\php8.1.0\cacert.pem"
Make sure you remove the semicolon at the beginning of the line.
Save changes to php.ini, restart WampServer64, and you're good to go!
You need to edit the following file: vendor/guzzlehttp/guzzle/src/Client.php
$defaults = [
    'allow_redirects' => RedirectMiddleware::$defaultSettings,
    'http_errors' => true,
    'decode_content' => true,
    'verify' => false, // By default this value will be true
    'cookies' => false
];
There may be a security issue with this, but it will work.
For Laravel: the steps below will be helpful.
Update Guzzlehttp to version 5.2.
find the file under \vendor\guzzlehttp\guzzle\src\Client.php
edit default settings to
protected function getDefaultOptions()
{
    $settings = [
        'allow_redirects' => true,
        'exceptions' => true,
        'decode_content' => true,
        'verify' => getcwd() . '/vendor/guzzlehttp/guzzle/src/cacert.pem'
    ];

    return $settings;
}
Download the latest cacert.pem file from http://curl.haxx.se/ca/cacert.pem and place it under /vendor/guzzlehttp/guzzle/src/
What I wanted to do is protect my private files, so that only their owners can access them. I changed my public disk to the storage folder:
'orders' => [
    'driver' => 'local',
    'root' => storage_path('files/orders'),
],
And created a routed controller to be able to make files private (I intentionally excluded this logic from the example):
Route::get('files/{slug}', [
    'as' => 'get.file',
    'uses' => 'FileController@getFile',
]);
class FileController extends Controller
{
    public function getFile($slug)
    {
        // I use spatie/laravel-medialibrary
        // to manage file uploads.
        $media = Media::where('slug', $slug)->first();

        return response()->file($media->getPath());
    }
}
getPath() is a spatie/laravel-medialibrary method that returns the correct path to the file: /home/vagrant/code/project/storage/files/orders/2/b414a7416571145ea9dcf9bda9a845a2.jpg. It clearly finds the file, because I can use \Symfony\Component\HttpFoundation\File\File() to get the file's data, like its mime type or size (and I also don't get a 404 error as a response). But when I follow, say, the http://localhost:3000/files/2RmYR3 route, I get a blank 200 response.
One person tested pretty much the same code on his end, and it worked, but he was on Valet, so it might be a Homestead issue. He also suggested it might be an issue with headers, and I tried sending different headers (content-type/length, for instance), but nothing helped; the response body stays empty no matter which headers I set.
I also tried vagrant destroy, just in case, but the issue stayed. In any case, thank you for your time.
It resolved itself. I started stripping pieces from the application to understand which part of the code was messing with responses; at first I thought it was my view composer, but no. I did a fresh, separate Laravel installation, copied config/app.php from the fresh installation to my project, then just restored the lists of providers and aliases that I had before, and that did it.
My app.php lacked the 'log_level' => env('APP_LOG_LEVEL', 'debug'), line that was added to Laravel at some point after I started the project, and maybe that's why responses didn't work properly, but I doubt it.