On my client server I push jobs to AWS SQS using Artisan::queue. In the .env file I set QUEUE_DRIVER=sqs, and in config/queue.php I have the following configuration.
'default' => env('QUEUE_DRIVER', 'sqs'),

'sqs' => [
    'driver' => 'sqs',
    'key'    => 'MY_AWS_KEY',
    'secret' => 'MY_AWS_SECRET',
    'prefix' => 'https://sqs.us-west-2.amazonaws.com/1234567890',
    'queue'  => 'queue-name',
    'region' => 'us-west-2',
],
Now, when I call Artisan::queue from a controller, a message is created in SQS. I can see it in the AWS console, and it looks like this:
{"job":"Illuminate\\Foundation\\Console\\QueuedJob",
"data":[{"some_data_key":"some_data_value"}]}
Everything is fine so far, I believe, but my worker tier never receives the data. Under Worker tier > Configuration > Worker Details I have configured:
Worker queue: queue-name
Worker queue URL: https://sqs.us-west-2.amazonaws.com/1234567890/queue-name
HTTP path: /worker
My problem is that I always get a 404 error on the /worker path. As soon as a message is sent, the "Messages in Flight" count in the AWS SQS console goes up by one, and the worker tier's log file fills with lines like
`127.0.0.1 (-) - - [28/Jun/2017:08:37:10 +0000] "POST /worker HTTP/1.1" 404 204 "-" "aws-sqsd/2.3"`
I checked whether a POST request to /worker returns an error, but it works fine on a different server (I couldn't test it on the worker tier itself, since I have no URL address for it). At this point, the worker tier server has only
Route::match(['GET', 'POST'], 'worker', function () {
    return 200;
});
in routes/web.php, to check whether POST requests can reach it at all.
What did I do wrong? Did I miss something, or implement it the wrong way?
If you are using a worker environment in Elastic Beanstalk, make sure that
Configuration -> Software Configuration -> Container Options -> Document Root
is set to
/public
Also check your route service provider, typically
app/Providers/RouteServiceProvider.php
Look at its map() function to verify that your HTTP routes are registered from the appropriate file; there can be references to multiple route files, such as web.php and api.php.
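For reference, a sketch of the stock Laravel 5.x provider (method and property names follow the default skeleton, which may differ slightly from your app):

// app/Providers/RouteServiceProvider.php
public function map()
{
    $this->mapApiRoutes(); // registers routes/api.php under the /api prefix
    $this->mapWebRoutes(); // registers routes/web.php
}

protected function mapWebRoutes()
{
    Route::middleware('web')
         ->namespace($this->namespace)
         ->group(base_path('routes/web.php'));
}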
I am creating an API that uses WebSockets for real-time communication. I use Laravel 8 with Pusher as the broadcast driver on the backend, Soketi as the WebSocket server on Windows (development environment), and Postman to test the requests.
The problem is that my connection to the Soketi WebSocket server always closes after about 2 minutes of connectivity with a 4201 error: "Connection was closed due to an unknown error". I would like the connection to last longer, ideally indefinitely; at least, I think that's how it should work.
[Screenshot: console connection message]
[Screenshot: console disconnection message]
Both Postman and the console show the error; note the time from connection to disconnection.
I made a workaround to force Postman to reconnect when an unexpected error closes the connection, but that doesn't re-subscribe to the channels that were subscribed before the disconnection.
In config/broadcasting.php I have:
'pusher' => [
    'driver' => 'pusher',
    'key' => env('PUSHER_APP_KEY'),
    'secret' => env('PUSHER_APP_SECRET'),
    'app_id' => env('PUSHER_APP_ID'),
    'options' => [
        'host' => env('PUSHER_HOST'),
        'port' => env('PUSHER_PORT'),
        'scheme' => env('PUSHER_SCHEME', 'http'),
        'cluster' => env('PUSHER_APP_CLUSTER'),
        'useTLS' => env('PUSHER_SCHEME') === 'https',
    ],
],
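For completeness, the matching .env entries for a local Soketi server would look roughly like this (the app values come from my Soketi config below; the host and port are assumptions based on a default local setup):

# Hypothetical .env values; adjust host/port to your own Soketi server
PUSHER_APP_ID=SLMT
PUSHER_APP_KEY=1229272
PUSHER_APP_SECRET="sLq2dAswE&9q"
PUSHER_HOST=127.0.0.1
PUSHER_PORT=6001
PUSHER_SCHEME=http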
My Soketi config:
{
    "debug": true,
    "port": 6001,
    "appManager.array.apps": [
        {
            "id": "SLMT",
            "key": "1229272",
            "secret": "sLq2dAswE&9q",
            "webhooks": [
                {
                    "url": "",
                    "event_types": ["channel_occupied"]
                }
            ]
        }
    ]
}
The key and secret values are only used for development, which is why I don't hide them. Any help is much appreciated, because this situation makes testing WebSocket events very tedious when the connection keeps breaking.
EDIT
I forgot to mention a crucial piece of information: my project is an API that uses Sanctum to authenticate users. It does not serve any HTML, in case someone wants to reproduce this.
Alright so I think I figured it out. I believe the socket connection closes due to an activity timeout.
How did I arrive at this conclusion?
I have another app that also uses WebSockets, but instead of an API it serves HTML. Anyway, I fired up the Soketi server with different credentials specific to the HTML project and compared the console output with the API project, where the problem occurs. I noticed that in the HTML app, which uses Laravel Echo, the client constantly sends a pusher:ping event to the server every 30 seconds, and the server replies with a pusher:pong event.
The interesting part is that the pusher:pong event refreshes the _idleStart value under the Timeout property of the WebSocket connection output, which essentially keeps the connection from closing due to inactivity. I think the 2-minute delay comes from the _idleTimeout: 120000 property. (Please refer to the images in the question for clarity.)
After this discovery I went back to my API application and composed a pusher:ping event message in Postman, which I then kept sending to the server manually every minute, and the connection held intact!
The ping message that I mention looks like this in Postman:
{
    "event": "pusher:ping",
    "data": {}
}
So, why was it so hard to figure out in the first place?
I have to admit I had my suspicions, but I didn't want to conclude anything without checking. The error shown by the server is not detailed or descriptive; it literally says "Connection was closed due to an unknown error", and there is no lookup table for the error code, AFAIK.
What does this mean?
Well, in my opinion this is a Postman problem. A client is responsible for keeping the connection to the server open in a WebSockets implementation, right? Postman should at least offer an option to auto-send messages for this sort of thing, but I'll cut them some slack because their WebSocket request feature is apparently still in beta.
I also feel the Soketi server output should provide more detailed error information, but I'll cut them some slack too, because it's a fairly new server app, and actually a great and very stable one, if I might add.
Problem description:
I am unable to send SMS from AWS Lambda.
Controller Code
try {
    $sms = AwsFacade::createClient('sns');
    $result = $sms->publish([
        'MessageAttributes' => [
            'AWS.SNS.SMS.SenderID' => [
                'DataType' => 'String',
                'StringValue' => 'CyQuer',
            ],
            'AWS.SNS.SMS.SMSType' => [
                'DataType' => 'String',
                'StringValue' => 'Transactional',
            ],
        ],
        'Message' => "Hello John Doe".PHP_EOL."Use following Code to Login:".PHP_EOL."123456",
        'PhoneNumber' => $phone,
    ]);
    Log::info($result);
} catch (Exception $e) {
    Log::error($e->getMessage());
}
Error message:
via web
{"message": "Internal server error"}Task timed out after 28.00 seconds
via Artisan
Task timed out after 120.00 seconds
Setup
Laravel application running on AWS Lambda using bref/laravel-bridge
An IAM user for this application has been created. Locally everything works. Online everything works too, except sending SMS.
Tried solutions:
The following packages were tried.
https://github.com/aws/aws-sdk-php
https://github.com/aws/aws-sdk-php-laravel
All of the approaches described below worked locally, but not on AWS Lambda.
Write AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY directly into config/aws.php
Write AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY directly in the code:
$sms = AwsFacade::createClient('sns', [
    'credentials' => [
        'key' => '********************',
        'secret' => '****************************************',
    ],
    'region' => env('AWS_REGION', 'eu-central-1'),
    'version' => 'latest',
    'ua_append' => [
        'L5MOD/' . AwsServiceProvider::VERSION,
    ],
]);
Giving all IAM users full admin access didn't work either.
Locally, all of these alternatives have always worked!
Does anyone know the problem and have a solution for it? I've been trying to find a way for more than 24 hours. My last resort would be to rebuild the complete call via cURL, but I hope someone has or finds a solution.
For anyone facing a similar issue: this could be, and most likely is, due to the Lambda being configured in a VPC (most likely to give it access to RDS) and therefore losing internet access, and by extension access to other AWS services. One way to work around this is to give your VPC-bound Lambda internet access by setting up a NAT gateway (guide here). Alternatively, you can use VPC endpoints if the desired AWS service is supported. More info here.
In my experience, I had issues being unable to use specific credentials for certain clients in the AWS PHP SDK. I've seen that the best solution is to include the IAM permissions needed in the policy statements for the lambda function itself.
In this case for example, you may need to include the relevant statement for SNS like below:
{
    "Action": [
        "sns:*"
    ],
    "Resource": "*",
    "Effect": "Allow"
}
If you're using serverless to deploy, you may define it under provider.iamRoleStatements in the serverless.yml file like below:
provider:
  name: aws
  iamRoleStatements:
    - Effect: Allow
      Action:
        - sns:*
      Resource: '*'
IMPORTANT: grant only the minimum permissions your Lambda needs. The statements above are just examples.
After the relevant permissions are applied in this way, you may drop the specific credentials from the constructor of your AWS client.
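For illustration, a minimal sketch of what that looks like with the aws-sdk-php-laravel facade used in the question (the region value is an assumption):

use Aws\Laravel\AwsFacade;

// No 'credentials' entry: the SDK's default provider chain picks up the
// temporary credentials of the Lambda function's execution role.
$sns = AwsFacade::createClient('sns', [
    'region'  => env('AWS_REGION', 'eu-central-1'),
    'version' => 'latest',
]);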
I've been using a fanless Ubuntu server for a couple of months and have transferred my Laravel Passport API to different servers, such as Ubuntu and, for development, Windows.
The current version of my API works fine on those servers. But when I transferred the Laravel project to my AWS server, I started getting an error like this:
{
    "message": "cURL error 6: Could not resolve host: api-a.mydomain.comoauth (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)",
    "exception": "GuzzleHttp\\Exception\\ConnectException",
    "file": "/var/www/html/api_tk_a/vendor/guzzlehttp/guzzle/src/Handler/CurlFactory.php",
    "line": 185,
    "trace": [
        {
            ....some lines of multiple errors
        }
    ]
}
I tried adding some lines to my /etc/hosts file, like this:
52.***.***.205 api-a.mydomain.com
52.***.***.205 api-b.mydomain.com
or
127.0.0.1 api-a.mydomain.com
127.0.0.1 api-b.mydomain.com
I also tried putting my inet addr (171.*.**.**) in the hosts file, but I still get the same error.
(I have two different APIs; that's why there are a and b.)
None of this worked.
Can anyone point out why I'm getting this error?
Some threads say this has nothing to do with Guzzle and instead point to an IP/DNS issue, but I can't find a good way to figure this out.
UPDATE
My .env file has something like this
APP_NAME=Laravel
APP_ENV=local
APP_KEY=base64:1AEVUl5JHMzLkhxU7MDlDHtfZ6KB9UybjzUxLaYp9vg=
APP_DEBUG=true
APP_URL=http://localhost
I tried setting APP_URL to https://api-a.mydomain.com, but it still doesn't work.
API CODE (this code works fine elsewhere):
use GuzzleHttp\Client;

$http = new Client;
$response = $http->post(url('oauth/token'), [
    'form_params' => [
        'grant_type' => 'password',
        'client_id' => '2',
        'client_secret' => '8mEsN0RZkljKZlbiJFfnNKahcbcOVkoQVG7C2Xwl',
        'username' => $user->email,
        'password' => $request->password,
        'scope' => '',
    ],
]);
This is my routes/api.php
use Illuminate\Http\Request;

Route::post('/login', 'Auth\Api\AuthController@login');
EC2 Ping
For ping to work on an EC2 instance, you have to open the ICMP port in the Security Group for your IP. For curl, also verify that you have opened port 80/8080 for your instance.
I found a solution: I used an absolute URL instead of the url() helper. (Note the concatenated host api-a.mydomain.comoauth in the error message, which is what url('oauth/token') produced here.)

$response = $http->post('https://api-a.mydomain.com/oauth/token', [
    'form_params' => [
        // same parameters as in the code above
    ],
]);
When I deploy my application to an EC2 instance, it fails to fetch messages from my SQS queue and instead throws an exception with status code 403: Forbidden, access to the resource {sqs queue} is denied. However, when I run the same code in my local environment, my application can fetch messages from the SQS queue.
My application uses the Symfony framework and passes pre-configured AWS credentials, for a user who has access to this queue, from parameters.yml into \Aws\Sqs\SqsClient().
If, on the EC2 instance, I run aws configure and configure the AWS CLI with the same credentials, the application can pull messages from the SQS queue. I am concerned here because it seems like the AWS SDK is overriding the credentials I pass it.
As an example, the following code, even with hard-coded parameters that I have verified are valid credentials, returns a 403 when run on an EC2 instance.
$sqs = new \Aws\Sqs\SqsClient([
    [
        'key' => '{my key}',
        'secret' => '{my secret}'
    ],
    'region' => 'us-east-1',
    'version' => 'latest'
]);

$response = $sqs->receiveMessage([
    'QueueUrl' => 'https://sqs.us-east-1.amazonaws.com/{my account}/{my queue}'
]);
Does anyone have any suggestions about what may be happening here?
Try it with the credentials key in the config. In your snippet the key/secret array is passed without the 'credentials' key, so the SDK never uses it:
$sqs = new \Aws\Sqs\SqsClient([
    'credentials' => [
        'key' => '{my key}',
        'secret' => '{my secret}',
    ],
    'region' => 'us-east-1',
    'version' => 'latest'
]);

$response = $sqs->receiveMessage([
    'QueueUrl' => 'https://sqs.us-east-1.amazonaws.com/{my account}/{my queue}'
]);
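As a follow-up, a short sketch of reading the fetched messages (the 'Messages' key is absent from the result when the queue returns nothing, hence the null coalescing):

foreach ($response->get('Messages') ?? [] as $message) {
    // Process the message body.
    echo $message['Body'], PHP_EOL;

    // Delete the message after successful processing so it is not redelivered.
    $sqs->deleteMessage([
        'QueueUrl'      => 'https://sqs.us-east-1.amazonaws.com/{my account}/{my queue}',
        'ReceiptHandle' => $message['ReceiptHandle'],
    ]);
}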
This might help you debug your issue:
1. Run aws sqs list-queues on the command line. If your queue is not listed in the result set, your AWS key doesn't have permission.
2. Run aws sqs receive-message --queue-url <queue_url>, where queue_url is your queue's complete URL from step 1. You should see all the messages in the queue.
If both steps complete without errors, the issue is probably on your application's end.
It's bad practice to store AWS credentials on EC2 instances. It's much better to create an IAM role with the sqs:ReceiveMessage permission and attach that IAM role to your EC2 instance; the client can then be constructed without any credentials at all, as sketched below.
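A minimal sketch under that setup (no key/secret in code; the region is an assumption carried over from the question):

// With an IAM role attached to the instance, the SDK's default credential
// provider chain fetches temporary credentials from the EC2 instance
// metadata service automatically.
$sqs = new \Aws\Sqs\SqsClient([
    'region'  => 'us-east-1',
    'version' => 'latest',
]);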
I just want to queue an email when a user registers. So I do this when the user posts the registration form:
Mail::queue('emails.activate', $data, function ($message) use ($user) {
    $message->from('no-reply@mysite.com', 'Mysite.com');
    $message->to($user->email, $user->username)->subject('Welcome');
});
The queue listener is running (php artisan queue:listen), and a supervisor process makes sure it restarts if stopped.
It works and the user gets the email, but the HTTP response when registering is very slow: exactly as slow as I would expect if I were sending the email directly. If I comment out all the queuing code above, the HTTP response time is fine.
I use the sync driver in config/queue.php:
'default' => 'sync',

'connections' => array(

    'sync' => array(
        'driver' => 'sync',
    ),

    // etc...
Finally, I run my own private server (Ubuntu) with Postfix. Can someone help me figure out why the response is so slow while I'm queuing the email?
The sync driver runs its queued jobs synchronously, inside the same request, before Laravel finishes its execution. That is why it is called the sync driver; you will need to change it to a real queue backend to get the desired behavior.
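For reference, a hedged sketch of what switching away from sync could look like in the same config file (beanstalkd is just one option; the host and queue names below are the stock defaults, not taken from the question):

'default' => 'beanstalkd',

'connections' => array(
    'beanstalkd' => array(
        'driver' => 'beanstalkd',
        'host'   => 'localhost',
        'queue'  => 'default',
    ),
),

With a real driver in place, the request only pushes the job onto the queue, and the php artisan queue:listen process you already run under supervisor sends the email out of band.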