Laravel 5.5 config() not showing data - php

I'm working on an old project in Laravel 5.5. It was going fine till now. But recently I started configuring Laravel Socialite and added all the provider credentials to services.php. When redirecting to the provider, the system throws an error: formatRedirectUrl() must be of the type array, null given.
So I checked whether the providers are set correctly. To do that I used config('services.facebook.client_id'), but it returns nothing.
Note:
I have tried clearing cache: php artisan config:clear
Caching again: php artisan config:cache
Tried config('app.name'), which returns the app name as expected.
Any idea what's the reason for this unexpected behaviour?
services.php:
<?php
return [

    /*
    |--------------------------------------------------------------------------
    | Third Party Services
    |--------------------------------------------------------------------------
    |
    | This file is for storing the credentials for third party services such
    | as Stripe, Mailgun, SparkPost and others. This file provides a sane
    | default location for this type of information, allowing packages
    | to have a conventional place to find your various credentials.
    |
    */

    'facebook' => [
        'client_id' => env('FACEBOOK_CLIENT_ID'),
        'client_secret' => env('FACEBOOK_CLIENT_SECRET'),
        'redirect' => env('CALLBACK_URL_FACEBOOK'),
    ],

    'google' => [
        'client_id' => env('GOOGLE_CLIENT_ID'),
        'client_secret' => env('GOOGLE_CLIENT_SECRET'),
        'redirect' => env('CALLBACK_URL_GOOGLE'),
    ],

    'twitter' => [
        'client_id' => env('TWITTER_CLIENT_ID'),
        'client_secret' => env('TWITTER_CLIENT_SECRET'),
        'redirect' => env('CALLBACK_URL_TWITTER'),
    ],

];
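A hedged note on the likely cause: once php artisan config:cache has been run, Laravel serves every config() call from the cached file in bootstrap/cache/config.php, and env() returns null everywhere outside the config files. So keys added to services.php after caching (or .env values missing on the server at cache time) come back empty. A minimal check, assuming a standard project layout:

```shell
php artisan config:clear     # delete the cached config file
php artisan config:cache     # rebuild it from the current config/ and .env
php artisan tinker
# then, inside tinker:
# >>> config('services.facebook')   # should print the facebook array
```

If the values appear after config:clear but vanish again after config:cache, the .env entries (FACEBOOK_CLIENT_ID and friends) are likely missing or unreadable on the machine where the cache was built.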


How to set Keycloak as authentication provider for humhub

I have a local apache2 server running humhub 1.3.14.
My goal is to set Keycloak located on my rancher cluster as the authentication provider for humhub.
After selecting "keycloak OpenId Connect" the user is successfully redirected to the keycloak server. After the user has authenticated, keycloak redirects back to my local humhub server.
There humhub complains:
"Unable to verify JWS: Unsecured connection".
To validate the JWS, humhub uses yii2-authclient/src/OpenIdConnect.php, which requires "spomky-labs/jose:~5.0.6" (which is abandoned, but yii2 still uses it).
Setting $validateJws = false in humhub/protected/vendor/yiisoft/yii2-authclient/src/OpenIdConnect.php does nothing.
humhub/protected/config/common.php:
return [
    'params' => [
        'enablePjax' => false,
    ],
    'components' => [
        'urlManager' => [
            'showScriptName' => false,
            'enablePrettyUrl' => false,
        ],
        'authClientCollection' => [
            'class' => 'yii\authclient\Collection',
            'clients' => [
                'keycloak' => [
                    'class' => 'yii\authclient\OpenIdConnect',
                    'issuerUrl' => 'https://xxxx/auth/realms/humhub',
                    'clientId' => 'humhub',
                    'clientSecret' => 'xxxxxxx',
                    'name' => 'keycloak',
                    'title' => 'Keycloak OpenID Connect',
                    'tokenUrl' => 'https://xxxx/auth/realms/humhub/protocol/openid-connect/token',
                    'authUrl' => 'https://xxxx/auth/realms/humhub/protocol/openid-connect/auth',
                    'validateAuthState' => 'false',
                    'validateJws' => 'false',
                ],
            ],
        ],
    ],
];
Can anyone help? Is further information required?
UPDATE
After updating "spomky-labs/jose" to "spomky-labs/jose:~6.1.0", the response from humhub changed to:
"Unable to verify JWS: The provided sector identifier URI is not valid: scheme must be one of the following: ["https"]."
UPDATE
I have enabled https also on my local apache2 server which runs humhub.
I also downgraded spomky-labs/jose back to version 5.0.6, because of compatibility problems with the current humhub version 1.3.14.
After that, the JWS error seems to be fixed, but a new error occurred:
Could it be caused by the content type in the JWS, which is not "application/json" but just "" (empty)?
If so, how can this be fixed?
Finally I found the solution: it is not working because humhub does not follow the specification in its OIDC adapter. After redirecting back from Keycloak, the following error occurs:
The OpenID Connect 1.0 specification describes that an ID token has to be signed using a JWS (JSON Web Signature). Keycloak does that, but does not set the "cty" field. Per https://www.rfc-editor.org/rfc/rfc7515#section-4.1.10 (RFC 7515), this field is optional, which means that humhub (v. 1.3.13) has a wrongly implemented OpenID Connect 1.0 adapter, because it treats this field as mandatory.
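To check whether a token hits the same problem, one can decode the ID token's protected header and look for "cty" directly. A minimal sketch (the token value below is a placeholder, and base64url padding is handled crudely):

```php
<?php
// Decode the protected header of a JWS compact token (header.payload.signature)
// and check whether the optional "cty" field (RFC 7515 §4.1.10) is present.
$idToken = 'eyJhbGciOiJSUzI1NiJ9.e30.c2ln'; // placeholder token
[$header] = explode('.', $idToken);
$json = base64_decode(strtr($header, '-_', '+/'));
$claims = json_decode($json, true);
var_dump(array_key_exists('cty', $claims)); // Keycloak typically omits it
```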

Laravel - No emails sending using mailgun

Server: Digital Ocean
Ubuntu 16.04
Laravel 5.8
I cannot get email to send out of Laravel using mailgun.com.
In Digital Ocean I have all outgoing ports open on the firewall, and I have the correct DNS settings in Digital Ocean for the TXT and MX records. I have the correct and verified DNS records at my domain registrar, and Mailgun shows a green checkmark on all of them.
config/mail.php
return [
    'driver' => 'mailgun',
    'host' => 'smtp.mailgun.org',
    'port' => 587,
    'from' => [
        'address' => 'orders@domain.com',
        'name' => 'Company Name',
    ],
    'encryption' => 'tls',
    'username' => 'orders@mg.domain.com',
    'password' => 'xxxxd663hd02j727bb2eefd1ea38bbe0-58bc211a-670xxxx',
];
config/services.php
'mailgun' => [
    'domain' => 'https://api.mailgun.net/v3/mg.domain.com',
    'secret' => 'xxxxehbe8v25g3374e5as3ff32a45995-39bc661a-4716xxxx',
],
Controller
use Illuminate\Support\Facades\Mail;

$data = [
    'email' => 'email@yahoo.com',
    'name' => 'Bob Smith',
];
$body_data = [
    'id' => '1234',
];
Mail::send('emails.shipped', $body_data, function ($message) use ($data) {
    $message->to($data['email'], $data['name'])->subject('Test Email');
});
When I change the mail driver to log and then check the log file, it looks great. Everything looks perfect, and I have used Mailgun before on Laravel 5.5 with no problems.
I have also tried the new Laravel build method, with the same issue.
I get no errors, I have checked the apache2 logs, no logs appear in Mailgun, and of course no email comes through in the inbox or spam.
My question is, am I missing anything? What other troubleshooting can I do? It seems like my app isn't connecting to Mailgun correctly.
I think that in your config/services.php the mailgun.domain should be more like
mg.domain.com (or sandboxXXXXXXX.mailgun.org if that's a dev environment), and not a URL like the one you've set.
Try to put:
'endpoint' => env('MAILGUN_ENDPOINT', 'api.mailgun.net'),
in your mailgun array.
I'm using Laravel 5.8 and it's working with all default configuration.
.env
MAIL_DRIVER=mailgun
MAILGUN_DOMAIN=sandbox8a408833ad1540e7b3a5d0151f606531.mailgun.org
MAILGUN_SECRET=92df7e85eeeaccaeae3d3b3164600666-87cdd773-8c819599
web.php
Route::get('send_test_email', function () {
    Mail::raw('Sending emails with Mailgun and Laravel is easy!', function ($message) {
        $message->to('your_test_email@gmail.com');
    });
});
services.php
'mailgun' => [
    'domain' => env('MAILGUN_DOMAIN'),
    'secret' => env('MAILGUN_SECRET'),
],
I struggled with this very same problem (Laravel + Mailgun) for a FULL DAY.
This is what ultimately solved my problem. Hope this helps!
MAIL_DRIVER=mailgun
MAILGUN_DOMAIN=mail.mydomain.com
MAILGUN_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxx
MAIL_FROM_ADDRESS=kp@mydomain.com
MAIL_FROM_NAME='KP'
In routes/web.php:
Route::get('/tst', function () {
    Mail::raw('Sending emails with Mailgun and Laravel is easy!', function ($message) {
        $message->to('kp@yahoo.com', 'K P')->subject('Hello there, how are you?');
    });
    echo "string";
});
Note: you will need to make sure that your MAILGUN_DOMAIN is set and that the MX / DNS records exist for your domain. This can take up to 24 hours to propagate (sadly, the most annoying part). But these are all the settings you need.
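When Laravel reports no errors and nothing appears in the Mailgun logs, it also helps to take Laravel out of the equation and hit Mailgun's HTTP API directly. A sketch with placeholder domain, key, and addresses; if this call also fails, the problem is with the credentials or domain rather than the app:

```shell
# Placeholders: substitute your real Mailgun domain, secret, and addresses
curl -s --user 'api:YOUR_MAILGUN_SECRET' \
    https://api.mailgun.net/v3/mg.domain.com/messages \
    -F from='Company Name <orders@mg.domain.com>' \
    -F to='email@yahoo.com' \
    -F subject='Mailgun API test' \
    -F text='Sent outside Laravel to isolate the problem'
```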
You may try installing the SwiftMailer library on your server:
sudo apt install -y php-swiftmailer

Laravel - Images not loading in production server, work fine in local

I'm pretty new to working with Laravel, and I just don't really understand how it deals with files. I followed the instructions in the documentation and used php artisan storage:link as instructed, but no go. I can't even see the images by going to their location in my URL.
Here's my config/filesystems file:
<?php
return [

    /*
    |--------------------------------------------------------------------------
    | Default Filesystem Disk
    |--------------------------------------------------------------------------
    |
    | Here you may specify the default filesystem disk that should be used
    | by the framework. The "local" disk, as well as a variety of cloud
    | based disks are available to your application. Just store away!
    |
    */

    'default' => env('FILESYSTEM_DRIVER', 'local'),

    /*
    |--------------------------------------------------------------------------
    | Default Cloud Filesystem Disk
    |--------------------------------------------------------------------------
    |
    | Many applications store files both locally and in the cloud. For this
    | reason, you may specify a default "cloud" driver here. This driver
    | will be bound as the Cloud disk implementation in the container.
    |
    */

    'cloud' => env('FILESYSTEM_CLOUD', 's3'),

    /*
    |--------------------------------------------------------------------------
    | Filesystem Disks
    |--------------------------------------------------------------------------
    |
    | Here you may configure as many filesystem "disks" as you wish, and you
    | may even configure multiple disks of the same driver. Defaults have
    | been setup for each driver as an example of the required options.
    |
    | Supported Drivers: "local", "ftp", "s3", "rackspace"
    |
    */

    'disks' => [

        'local' => [
            'driver' => 'local',
            'root' => storage_path('app'),
        ],

        'public' => [
            'driver' => 'local',
            'root' => storage_path('app/public'),
            'url' => env('APP_URL') . '/storage',
            'visibility' => 'public',
        ],

        'img' => [
            'driver' => 'local',
            'root' => storage_path('images'),
        ],

        's3' => [
            'driver' => 's3',
            'key' => env('AWS_KEY'),
            'secret' => env('AWS_SECRET'),
            'region' => env('AWS_REGION'),
            'bucket' => env('AWS_BUCKET'),
        ],

    ],

];
and this is how I'm calling the images in the template:
<img src="{{asset('storage/images/icons/icon.png')}}">
I've checked, and the images are physically located at ROOT/storage/app/public/images.
What am I doing wrong here? And most importantly, why does this all work just fine in my local environment and not in my production server?
For additional info: the production server is hosted by Hostgator, in a subdomain of my company's main site. I don't know if that's an issue; as I said, I am new to this whole thing.
The asset helper you are using in your Blade templates gives you the path for public assets, that is, assets located in the public folder of your Laravel application. Your static assets (such as logos, background images, etc) should be kept inside of your public directory, which is where your app is served from.
The storage directory is typically used for assets uploaded by a user in your application, such as a user uploading a new profile picture. To get the storage path, you would use the storage_path helper function to retrieve the path. Alternatively, you may use the Storage::url() method to retrieve a full URL to the storage path.
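As a concrete illustration of the distinction (a sketch, assuming the "public" disk configuration shown above and that php artisan storage:link has been run; $contents is a placeholder):

```php
<?php
use Illuminate\Support\Facades\Storage;

// A file saved on the "public" disk lands in storage/app/public/...
Storage::disk('public')->put('images/icons/icon.png', $contents);

// ...and is served through the public/storage symlink:
$url = Storage::disk('public')->url('images/icons/icon.png');
// i.e. APP_URL . '/storage/images/icons/icon.png'

// asset() only prefixes APP_URL, so it works for files physically
// inside public/ or reachable through the symlink:
// {{ asset('storage/images/icons/icon.png') }}
```

If the symlink was created locally but never on the production server (some shared hosts block symlinks entirely), the URLs will 404 there while working fine locally.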

SFTP Upload Laravel 5

I'm trying to upload files to my staging server running Linux Ubuntu.
I tried to follow everything in this post: https://laravelcollective.com/docs/5.1/ssh . I am on Laravel 5.1.
I configured my remote.php like this :
'connections' => [
    'production' => [
        'host' => '45.55.88.88',
        'username' => 'root',
        'password' => '',
        'key' => '/Users/bheng/.ssh/id_rsa', // Try: public | private
        'keytext' => '',
        'keyphrase' => '******',
        'agent' => '', // Try: empty | enabled | disabled
        'timeout' => 10,
    ],
],
I tried to test it in my project like this:
SSH::run('date', function ($line) { dd($line); });
I kept getting :
Unable to connect to remote server.
I tried it in my Mac OS Terminal:
sftp root@45.55.88.88
It’s working fine, I got
sftp> ls
Desktop Documents Downloads Music Pictures Public Templates Videos dead.letter
What did I do wrong or forget? What else should I try?
Do I need to do anything in my /etc/ssh/sshd_config?
Can someone please help me out if you've done this before?
Try setting the connection in case it's not using the correct settings, and see if that helps:
SSH::into('production')->run('date', function($line) {dd($line); });
The only other things I would recommend are:
Verify that your key is accessible from your code
Verify that the correct passphrase is being used
Double-check any other connection settings
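One way to narrow this down (assuming the key path from the config above) is to confirm, as the same user PHP runs under, that the key file is readable and that it actually authenticates on its own:

```shell
# Check that the key exists and is readable by the PHP/web user
ls -l /Users/bheng/.ssh/id_rsa

# Try the exact same key non-interactively; if this fails outside
# Laravel, the remote.php config cannot work either
ssh -i /Users/bheng/.ssh/id_rsa root@45.55.88.88 'date'
```

Note that 'key' in remote.php expects the private key, and the passphrase goes in 'keyphrase'; if the web user cannot read the key file, the library fails with the same generic "Unable to connect to remote server." message.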

Laravel 4.2 behind ELB - Sessions not working

I'm running Laravel 4.2 with database session storage.
My application is running behind a Load Balancer.
When only one instance is running, my application works fine.
Then if I enable a second instance, my application stops working.
My application adds a session to the database on almost every page load, and I can't write/read sessions (and can't log in).
Here is my session config
return array(
    'driver' => 'database',
    'lifetime' => 1440,
    'expire_on_close' => false,
    'files' => storage_path().'/sessions',
    'connection' => "mysql",
    'table' => 'sessions',
    'lottery' => array(2, 100),
    'cookie' => 'ycrm_session',
    'path' => '/',
    'domain' => null,
    'secure' => false,
);
My sessions table looks like this
Has anybody experienced this issue before?
I found the problem.
I'm using Forge to manage and deploy my application. When I deploy my code it runs composer install, and that seems to edit my config/app.php.
I just redeployed my config/app.php and it started working.
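For anyone hitting the same symptom: the usual way a rewritten config/app.php breaks load-balanced sessions (an assumption here, since the post doesn't say which value changed) is a mismatched encryption key, because Laravel 4.2 encrypts the session cookie with it. The 'key' value must be identical on every instance:

```php
<?php
// config/app.php (Laravel 4.2) — excerpt, placeholder value shown.
// If this key differs between instances behind the load balancer, a
// session cookie written by one instance cannot be decrypted by another,
// so every request looks like a brand-new session (hence a new row in
// the sessions table on almost every page load).
'key' => 'SameRandom32CharacterStringOnAll',
```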
