Background
I'm given a Laravel app whose queue is configured by Forge, and I'm trying to make it run on my localhost, which is OS X.
This is what I did:
installed beanstalkd on OS X
ran the beanstalkd server from my console: $ beanstalkd
ran the laravel worker command
$ php artisan queue:work beanstalkd --env=local --queue=default
I then did some actions that create jobs, but they never got processed. I used telnet as a poor man's monitor for beanstalkd, like so:
$ telnet localhost 11300
Trying ::1...
Connected to localhost.
Escape character is '^]'.
stats
OK 923
---
current-jobs-urgent: 0
current-jobs-ready: 3
current-jobs-reserved: 0
current-jobs-delayed: 0
current-jobs-buried: 0
cmd-put: 3
cmd-peek: 0
cmd-peek-ready: 0
cmd-peek-delayed: 0
cmd-peek-buried: 0
cmd-reserve: 0
cmd-reserve-with-timeout: 652
cmd-delete: 0
cmd-release: 0
cmd-use: 1
cmd-watch: 0
cmd-ignore: 0
cmd-bury: 0
cmd-kick: 0
cmd-touch: 0
cmd-stats: 8
cmd-stats-job: 0
cmd-stats-tube: 0
cmd-list-tubes: 0
cmd-list-tube-used: 0
cmd-list-tubes-watched: 0
cmd-pause-tube: 0
job-timeouts: 0
total-jobs: 3
max-job-size: 65535
current-tubes: 2
current-connections: 2
current-producers: 0
current-workers: 1
current-waiting: 0
total-connections: 8
pid: 56692
version: 1.10
rusage-utime: 0.010171
rusage-stime: 0.031001
uptime: 2023
binlog-oldest-index: 0
binlog-current-index: 0
binlog-records-migrated: 0
binlog-records-written: 0
binlog-max-size: 10485760
id: 3620777b4ee08cdc
Question
I can see that 3 jobs are ready, but I have no idea how to dispatch them (or, for that matter, find out what exactly is inside them). What should I do?
You can use the beanstalk_console web app: https://github.com/ptrofimov/beanstalk_console.
I would also log some info to a separate log file, to record values and details from within the running job. Then I tail that log file while executing the queued jobs and watching the beanstalk_console interface.
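If you'd rather not install anything, the beanstalkd text protocol you're already using over telnet can answer both questions. A sketch, where some-tube is a hypothetical name; your stats show current-tubes: 2 and cmd-use: 1, so the producer has most likely put the jobs into a tube the worker isn't watching:
list-tubes
OK 26
---
- default
- some-tube
use some-tube
USING some-tube
peek-ready
FOUND 1 233
(raw job body; for Laravel queues it includes the job class name)
If the ready jobs are in a tube other than default, point the worker at that tube, e.g. php artisan queue:work beanstalkd --env=local --queue=some-tube.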
Related
The Laravel app works fine if it's started manually with the command php artisan octane:start.
So I decided to run it with Supervisor, and I discovered that all external HTTP requests were rejected with curl error 7. Below is a test configuration with curl:
------ curl-test.conf -------
[program:curl_test]
process_name=%(program_name)s
command=/bin/sh -c "curl -v google.com"
autostart=true
loglevel=debug
autorestart=false
stdout_logfile=/tmp/curl-test.log
redirect_stderr=true
------------------ curl-test.log -----------
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 216.58.223.206...
* TCP_NODELAY set
* Immediate connect fail for 216.58.223.206: Software caused connection abort
* Closing connection 0
curl: (7) Couldn't connect to server
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 216.58.223.206...
* TCP_NODELAY set
* Immediate connect fail for 216.58.223.206: Software caused connection abort
* Closing connection 0
curl: (7) Couldn't connect to server
This happened as a result of an internet firewall restriction on my Mac. This issue is closed now.
https://www.php.net/manual/en/features.commandline.webserver.php
From PHP 7.4 onwards, the built-in web server can handle multiple incoming requests concurrently, up to the number of workers set in the environment variable PHP_CLI_SERVER_WORKERS.
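Note that the variable has to be present in the environment of the php process itself. A minimal sketch of starting the built-in server directly with it (the port and document root are assumptions):
PHP_CLI_SERVER_WORKERS=8 php -S 0.0.0.0:8000 -t public
If the server is launched by a wrapper (the symfony CLI here, inside Docker), the variable needs to reach that process, e.g. via an ENV line in the Dockerfile or the environment section of docker-compose.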
I have a web app composed of a couple dozen AJAX-powered lists. On the first page load using the built-in server, it slows to a crawl and usually fails due to timeouts in the PHP scripts.
I read about the above feature and added the environment variable (PHP runs in a Docker container). After shelling into my container and running top/ps, I can now see X number of PHP processes:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 722864 21932 14448 S 0.3 1.1 0:09.91 symfony
20 root 20 0 210892 48188 36544 S 0.0 2.4 0:00.79 php7.4
21 root 20 0 205676 33460 24500 S 0.0 1.6 0:00.22 php7.4
22 root 20 0 208212 40908 29640 S 0.0 2.0 0:00.42 php7.4
23 root 20 0 210644 42236 30836 S 0.0 2.1 0:00.61 php7.4
24 root 20 0 208764 40784 31176 S 0.0 2.0 0:01.14 php7.4
25 root 20 0 205804 33588 24508 S 0.0 1.6 0:00.22 php7.4
...
I am using Symfony to start the dev server, but no matter what I do, none of the processes seem to be carrying any of the load. What am I missing?
I switched to the built-in PHP server about 2 years ago, from NGINX. It made my Vagrant (now Docker) setup easier, but my performance took a hit, which I've dealt with. I'd like to improve the responsiveness of the app using this approach if possible.
Any ideas?
Background
I'm running a Laravel 5.3 powered web app on nginx. Prod is working fine (on an AWS t2.medium), and staging had been running fine until it recently got overloaded. Staging is a t2.micro.
Problem
The problem happened when we started hitting the API endpoints and getting this error:
503 (Service Unavailable: Back-end server is at capacity)
Using htop, we found that our beanstalkd queue processes were taking an insane amount of memory.
What I have tried
We used telnet to peek at what's going on inside beanstalkd:
$~/beanstalk-console$ telnet localhost 11300
Trying 127.0.0.1...
Connected to staging-api-3.
Escape character is '^]'.
stats
OK 940
---
current-jobs-urgent: 0
current-jobs-ready: 0
current-jobs-reserved: 0
current-jobs-delayed: 2
current-jobs-buried: 0
cmd-put: 451
cmd-peek: 0
cmd-peek-ready: 0
cmd-peek-delayed: 0
cmd-peek-buried: 0
cmd-reserve: 0
cmd-reserve-with-timeout: 769174
cmd-delete: 449
cmd-release: 6
cmd-use: 321
cmd-watch: 579067
cmd-ignore: 579067
cmd-bury: 0
cmd-kick: 0
cmd-touch: 0
cmd-stats: 1
cmd-stats-job: 464
cmd-stats-tube: 0
cmd-list-tubes: 0
cmd-list-tube-used: 0
cmd-list-tubes-watched: 0
cmd-pause-tube: 0
job-timeouts: 0
total-jobs: 451
max-job-size: 65535
current-tubes: 2
current-connections: 1
current-producers: 0
current-workers: 0
current-waiting: 0
total-connections: 769377
pid: 1107
version: 1.10
rusage-utime: 97.572000
rusage-stime: 274.560000
uptime: 1609870
binlog-oldest-index: 0
binlog-current-index: 0
binlog-records-migrated: 0
binlog-records-written: 0
binlog-max-size: 10485760
id: 906b3629b01390dc
hostname: staging-api-3
Nothing there seems concerning.
Question
I would like to have a more transparent look into what's going on in these jobs (i.e., what exactly are the jobs?). I know Laravel Horizon provides this, but it only comes with Laravel 5.5. I researched the queue monitors that are out there and tried to install beanstalk_console. Right now, having installed it, I'm getting "52.16.%ip%.%ip% took too long to respond", which I think is expected, considering the whole machine is already jammed.
I figure if I reboot the machine I can install beanstalk_console just fine, but then I'll lose the opportunity to investigate what's causing the problem this time around, since it's a rare occurrence. What else can I do to investigate exactly which jobs are draining the CPU, and why?
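One avenue that doesn't need beanstalk_console at all: the same telnet session can inspect individual tubes and jobs with the raw protocol. A sketch, assuming the default tube (use whatever tube names list-tubes reports; the job id and size below are made up):
list-tubes
OK 14
---
- default
stats-tube default
(per-tube counters: ready/reserved/delayed counts, current-watching, and so on)
peek-delayed
FOUND 450 187
(raw body of one of the two delayed jobs; the serialized Laravel payload includes the job class name, which tells you exactly what the job is)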
Update
I restarted the instance and the APIs work now, but I'm still seeing CPU at 100%. What am I missing?
I realize there are about 10 of these questions out there, but none fits me completely.
Steps completed:
Installed memcached
Installed the PHP memcache module
Updated the Laravel config to use memcache (see the config sketch below)
Restarted the server
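For reference, a typical cache config entry for the memcached store in config/cache.php looks roughly like this sketch (the host, port, and weight are assumptions, and the exact shape varies by Laravel version; note that Laravel's memcached driver is backed by the Memcached class, which is what the error below is about):
'memcached' => [
    'driver' => 'memcached',
    'servers' => [
        ['host' => '127.0.0.1', 'port' => 11211, 'weight' => 100],
    ],
],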
phpinfo() results:
memcache.allow_failover 1 1
memcache.chunk_size 8192 8192
memcache.default_port 11211 11211
memcache.default_timeout_ms 1000 1000
memcache.hash_function crc32 crc32
memcache.hash_strategy standard standard
memcache.max_failover_attempts 20 20
memcached-tool results:
accepting_conns 1
auth_cmds 0
auth_errors 0
bytes 0
bytes_read 14
bytes_written 1096
cas_badval 0
cas_hits 0
cas_misses 0
cmd_flush 0
cmd_get 0
cmd_set 0
cmd_touch 0
conn_yields 0
connection_structures 6
crawler_reclaimed 0
curr_connections 5
curr_items 0
decr_hits 0
decr_misses 0
delete_hits 0
delete_misses 0
evicted_unfetched 0
evictions 0
expired_unfetched 0
get_hits 0
get_misses 0
hash_bytes 524288
hash_is_expanding 0
hash_power_level 16
incr_hits 0
incr_misses 0
libevent 2.0.21-stable
limit_maxbytes 268435456
listen_disabled_num 0
lrutail_reflocked 0
malloc_fails 0
pid 12022
pointer_size 64
reclaimed 0
reserved_fds 20
rusage_system 0.043400
rusage_user 0.065101
threads 4
time 1421438137
total_connections 7
total_items 0
touch_hits 0
touch_misses 0
uptime 2607
version 1.4.21
It shows up in php -m as "memcache".
However, when I go into php artisan tinker and try to do any caching, I get the typical Fatal error: Class 'Memcached' not found in vendor/laravel/framework/src/Illuminate/Cache/MemcachedConnector.php on line 44
TL;DR
I have confirmed the install of memcached through multiple methods and confirmed the PHP module is installed, but it still won't let me use the Memcached class.
If you are in an Ubuntu environment, try installing Memcached with this:
sudo apt-get install php5-memcached
After that, restart your web server with
sudo service lighttpd restart
or
sudo service apache2 restart
or
sudo service nginx restart
Memcache and Memcached are two different PHP extensions: Memcache is the older, deprecated one, while Memcached is newer and fully supported.
Check out http://pecl.php.net/package/memcached
You may need to also install libmemcached https://launchpad.net/libmemcached/+download
apt-get install php-memcached
This solved the "Class 'Memcached' not found" error coming from Laravel.
In Laravel/Lumen 5.4, you can instead set CACHE_DRIVER=file in the .env file and the artisan command will work perfectly, though you will not get all the same commands in Lumen as in Laravel.
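Whichever extension route you take, a quick sanity check from the CLI confirms the right class is available. A minimal sketch, assuming a local memcached server on the default port 11211:
<?php
// Fails immediately with "Class 'Memcached' not found" if only the old
// memcache extension is installed; host and port are assumptions.
$m = new Memcached();
$m->addServer('127.0.0.1', 11211);
$m->set('probe', 'ok', 60);
var_dump($m->get('probe')); // expect string(2) "ok"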
I'm trying to wrap Pheanstalk in my PHP job base class. I'm testing the reserve and reserve-with-delay functionality, and I've found that I can reserve a job from a second instance of my base class without the first instance releasing the job or the TTR timing out. This is unexpected, since I was thinking this is exactly the thing job queues are supposed to prevent. Here are the beanstalkd commands for the first put and the first reserve, along with timestamps. I also do a stats-job request at the end:
01:40:15: Sending command: use QueuedCoreEvent
01:40:15: Got response: USING QueuedCoreEvent
01:40:15: Sending command: put 1024 0 300 233
a:4:{s:9:"eventName";s:21:"ReQueueJob_eawu7xr9bi";s:6:"params";a:2:{s:12:"InstanceName";s:21:"ReQueueJob_eawu7xr9bi";s:17:"aValueToIncrement";i:123456;}s:9:"behaviors";a:1:{i:0;s:22:"BehMCoreEventTestDummy";}s:12:"failureCount";i:0;}
01:40:15: Got response: INSERTED 10
01:40:15: Sending command: watch QueuedCoreEvent
01:40:15: Got response: WATCHING 2
01:40:15: Sending command: ignore default
01:40:15: Got response: WATCHING 1
01:40:15: Sending command: reserve-with-timeout 0
01:40:15: Got response: RESERVED 10 233
01:40:15: Data: a:4:{s:9:"eventName";s:21:"ReQueueJob_eawu7xr9bi";s:6:"params";a:2:{s:12:"InstanceName";s:21:"ReQueueJob_eawu7xr9bi";s:17:"aValueToIncrement";i:123456;}s:9:"behaviors";a:1:{i:0;s:22:"BehMCoreEventTestDummy";}s:12:"failureCount";i:0;}
01:40:15: Sending command: stats-job 10
01:40:15: Got response: OK 162
01:40:15: Data: ---
id: 10
tube: QueuedCoreEvent
state: reserved
pri: 1024
age: 0
delay: 0
ttr: 300
time-left: 299
file: 0
reserves: 1
timeouts: 0
releases: 0
buries: 0
kicks: 0
So far, so good. Now I do another reserve from a second instance of my base class, followed by another stats-job request. Notice the timestamps are within the same second, nowhere near the 300-second TTR I've set. Also notice in this second stats-job printout that there are 2 reserves of this job, with 0 timeouts and 0 releases.
01:40:15: Sending command: watch QueuedCoreEvent
01:40:15: Got response: WATCHING 2
01:40:15: Sending command: ignore default
01:40:15: Got response: WATCHING 1
01:40:15: Sending command: reserve-with-timeout 0
01:40:15: Got response: RESERVED 10 233
01:40:15: Data: a:4:{s:9:"eventName";s:21:"ReQueueJob_eawu7xr9bi";s:6:"params";a:2:{s:12:"InstanceName";s:21:"ReQueueJob_eawu7xr9bi";s:17:"aValueToIncrement";i:123456;}s:9:"behaviors";a:1:{i:0;s:22:"BehMCoreEventTestDummy";}s:12:"failureCount";i:0;}
01:40:15: Sending command: stats-job 10
01:40:15: Got response: OK 162
01:40:15: Data: ---
id: 10
tube: QueuedCoreEvent
state: reserved
pri: 1024
age: 0
delay: 0
ttr: 300
time-left: 299
file: 0
reserves: 2
timeouts: 0
releases: 0
buries: 0
kicks: 0
Does anyone have any ideas about what I might be doing wrong? Is there something I have to do to tell the queue that jobs should only be accessed by one worker at a time?
I'm doing an unset on the Pheanstalk instance as soon as I get the job off the queue, which I believe terminates the session with beanstalkd. Could this cause beanstalkd to decide the worker has died and automatically release the job without a timeout? I'm uncertain how much beanstalkd relies on connection state to determine worker state. I was assuming that I could open and close sessions with impunity, and that the job id was the only thing beanstalkd cared about to tie job operations together, but that may have been foolish on my part. This is my first foray into job queues.
Thanks!
My guess is that your first client instance closed the TCP socket to the beanstalkd server before the second one reserved the job.
Closing the TCP connection implicitly releases the job back onto the queue. These implicit releases (closed connection, quit command, etc.) do not seem to increment the releases counter.
Here's an example:
# Create a job, reserve it, close the connection:
pda@paulbookpro ~ > telnet 0 11300
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.
put 0 0 600 5
hello
INSERTED 1
reserve
RESERVED 1 5
hello
^]
telnet> close
Connection closed.
# Reserve the job, stats-job shows two reserves, zero releases.
# Use 'quit' command to close connection.
pda@paulbookpro ~ > telnet 0 11300
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.
reserve
RESERVED 1 5
hello
stats-job 1
OK 151
---
id: 1
tube: default
state: reserved
pri: 0
age: 33
delay: 0
ttr: 600
time-left: 593
file: 0
reserves: 2
timeouts: 0
releases: 0
buries: 0
kicks: 0
quit
Connection closed by foreign host.
# Reserve the job, stats-job still shows zero releases.
# Explicitly release the job, stats-job shows one release.
pda@paulbookpro ~ > telnet 0 11300
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.
reserve
RESERVED 1 5
hello
stats-job 1
OK 151
---
id: 1
tube: default
state: reserved
pri: 0
age: 46
delay: 0
ttr: 600
time-left: 597
file: 0
reserves: 3
timeouts: 0
releases: 0
buries: 0
kicks: 0
release 1 0 0
RELEASED
stats-job 1
OK 146
---
id: 1
tube: default
state: ready
pri: 0
age: 68
delay: 0
ttr: 600
time-left: 0
file: 0
reserves: 3
timeouts: 0
releases: 1
buries: 0
kicks: 0
quit
Connection closed by foreign host.
I got the same issue. The problem was multiple connections opened to beanstalkd:
use Pheanstalk\Pheanstalk;
use Pheanstalk\Job;

$pheanstalk = connect();
$pheanstalk->put(serialize([1]), 1, 0, 1800);

/** @var Job $job */
$job = $pheanstalk->reserve(10);
print_r($pheanstalk->statsJob($job->getId()));
// state is reserved, but only the connection that reserved the job can resolve/update it

$pheanstalk2 = connect();
print_r($pheanstalk->statsJob($job->getId()));
$pheanstalk2->delete($job);
// a new connection opened in the same process still cannot update the job:
// PHP Fatal error: Uncaught Pheanstalk\Exception\ServerException: Cannot delete job 89: NOT_FOUND in /var/www/vendor/pda/pheanstalk/src/Command/DeleteCommand.php:45

function connect() {
    return new Pheanstalk(
        'localhost',
        11300,
        5
    );
}
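For completeness, a minimal sketch of the pattern that avoids the problem, reusing the connect() helper above: reserve, process, and delete must all happen on the single connection that holds the reservation.
// Keep one connection open for the whole reserve/work/delete cycle;
// the reservation is tied to this TCP connection, not just to the job id.
$worker = connect();
$worker->watch('QueuedCoreEvent');
$worker->ignore('default');

while ($job = $worker->reserve(10)) {
    // ... process $job->getData() ...
    $worker->delete($job); // must be the same connection that reserved it
}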