I have a Laravel multi-tenancy setup, so each tenant has their own database containing the jobs/failed_jobs tables, plus a default database with the list of tenants and their database names (when a user signs up they get their own database).
I am trying to create a way of running all the jobs in each tenant database with a queue worker. The best approach I can think of is a scheduler that checks every minute for all the tenants in the default database and dispatches a job with each tenant's database name. That job gets picked up by another queue worker, which swaps the database connection to the correct database and then runs the Artisan command:
php artisan queue:work --stop-when-empty --tries=1
When I run this command in the console, it works perfectly and works through the jobs on all the tenant databases.
However, when I fire off the following command using Artisan in the Job:
Artisan::call('queue:work', ['--stop-when-empty' => true, '--timeout' => 30, '--tries' => 1]);
the job always times out and doesn't pick up any jobs in the tenant database.
The way I am setting the database connection dynamically is as follows:
// Point the "tenant" connection at the tenant's database and flush any existing connection
\Config::set('database.connections.tenant.database', 'tenant_1');
\DB::purge('tenant');
// Tell the database queue driver to use the tenant connection
\Config::set('queue.connections.database.connection', 'tenant');
Is there any reason why Artisan::call does not work, but the console command does?
Or is there a better way of dynamically running jobs on all the databases other than manually running them?
Fixed it. I used exec() to run php artisan queue:work for that tenant, and scheduled a command to constantly check the process ID to see whether it is still running or not.
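Roughly, the approach looks like the sketch below. This is only an illustration: the $tenants collection, the TENANT_DB environment variable, the PID-file location, and the posix_kill() liveness check are assumptions, not the exact code I used:

// Launch one queue:work process per tenant with exec() and record its PID
// so a scheduled command can later check whether it is still running.
foreach ($tenants as $tenant) {
    $pidFile = storage_path("app/queue_pids/{$tenant->database}.pid");

    // Skip tenants whose worker process is still alive (signal 0 only checks existence).
    if (file_exists($pidFile) && posix_kill((int) file_get_contents($pidFile), 0)) {
        continue;
    }

    // Start a background worker for this tenant and capture its PID via `echo $!`.
    $cmd = sprintf(
        'TENANT_DB=%s php %s queue:work --stop-when-empty --tries=1 > /dev/null 2>&1 & echo $!',
        escapeshellarg($tenant->database),
        base_path('artisan')
    );
    exec($cmd, $output);

    file_put_contents($pidFile, trim(end($output)));
}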
I have a script below that goes through 380 MySQL InnoDB databases and runs various CREATE TABLE, INSERT, UPDATE, etc. statements to migrate the schema. It runs from a web server that connects to a cloud database server. I am leaving the migration script out of this question as I don't think it is relevant.
I ran into an issue and I am trying to find a workaround.
I have a 4 GB RAM cloud database server running MySQL 5.6. I migrated 380 databases from 40 tables to 59 tables each. About 70% of the way through I got the errors below: it died in the middle of one migration and the server went down. I was watching memory usage and it ran out of memory. It is a database-as-a-service, so I don't have root access to the server and don't know all the details.
Running queries on phppoint_smg
Warning: Using a password on the command line interface can be insecure.
ERROR 2013 (HY000) at line 355: Lost connection to MySQL server during query
Running queries on phppoint_soulofhalloween
Warning: Using a password on the command line interface can be insecure.
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
Running queries on phppoint_srvais
Warning: Using a password on the command line interface can be insecure.
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
Here is a simplified version of the PHP script.
// Shell-escape the connection parameters before building the command line
$db_host = escapeshellarg($db_host);
$db_user = escapeshellarg($db_user);
$db_password = escapeshellarg($db_password);
foreach($databases as $database)
{
echo "Running queries on $database\n***********************************\n";
system("mysql --host=$db_host --user=$db_user --password=$db_password --port=3306 $database < ../update.sql");
echo "\n\n";
}
My questions:
Is there any way to avoid memory usage going up as I do the migration? I am doing it one database at a time. Or is the addition of tables and data the reason it goes up?
I was able to use the server afterwards, removed 80 databases, and finished the migration. It now has 800 MB free, and I expect that to go down to 600 MB; before the migration it was at 500 MB.
Your PHP sample doesn't use much memory, and it's not running on the database server, which is the one that went down, right? So the problem lies in your configured MySQL parameters.
Based on your Gist, and using a simple MySQL memory calculator, we can see that your MySQL service can use up to 3817 MB of memory. If the server only had 4 GB when the error happened, it's pretty probable that this was the cause (you need to have some additional memory for the OS and running applications). Increasing the memory or fine-tuning the server would resolve it. Take a look at the MySQL documentation page on server variables to best understand each value.
However, this may not be the only cause for disconnects/timeouts (but it does seem to be your case, since increasing memory resolved the problem). Another common problem is to underestimate the max_allowed_packet value (16MB in your configuration), because such scripts can easily have queries beyond this value (for example, if you have several values for a single INSERT INTO table ...).
Consider that max_allowed_packet should be bigger than the biggest command you issue to your database (not the whole SQL file, but each statement within it, i.e. the block between semicolons).
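If you want to rule that out, a quick way is to compare max_allowed_packet with the longest single statement in update.sql. A rough PHP sketch (the connection details are placeholders, and the naive split on ; is only good enough for an estimate):

// Compare max_allowed_packet against the longest single statement in update.sql.
// Host and credentials below are placeholders.
$pdo = new PDO('mysql:host=your-db-host', 'user', 'password');

$maxAllowed = (int) $pdo->query("SHOW VARIABLES LIKE 'max_allowed_packet'")
                        ->fetch(PDO::FETCH_ASSOC)['Value'];

// Naive split on ';' - fine for a rough estimate, not a real SQL parser.
$statements = explode(';', file_get_contents('../update.sql'));
$longest    = max(array_map('strlen', $statements));

printf("max_allowed_packet: %d bytes, longest statement: ~%d bytes\n", $maxAllowed, $longest);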
But please do consider more careful tuning, since a badly configured server may suddenly crash or become unresponsive, while it could run perfectly well without adding more memory. I suggest running performance tuning scripts like MySQLTuner-perl, which will analyze your data, index usage, and slow queries, and even propose the adjustments you need to optimize your server.
It's pretty obvious that your migration SQL queries are killing the server. It seems the database server simply has too little free RAM for such actions.
Depending on the database file size and the queries, this can certainly drive up your RAM usage.
Without knowing the exact server specs, the data in the database, and the queries you fire, there is no exact answer that can help you here.
Instead of spawning lots of processes, generate one file, then run it. Generate the file with something like:
// Build one script that switches to each database and sources the shared update file.
$out = fopen('tmp_script.sql', 'w');
foreach ($databases as $database)
{
    fwrite($out, "USE $database;\n");
    fwrite($out, "source ../update.sql\n");
}
fclose($out);
Then, either manually or programmatically, do
mysql ... < tmp_script.sql
It might be safer to do it manually so that PHP is out of the way.
One thing you should try to relieve your RAM, as your server is obviously extremely low on RAM, is to force garbage collection after unsetting big arrays once each loop iteration is complete.
I was facing a similar problem with pthreads under PHP 7 (and 512 GB of RAM) while handling 1,024 async connections to MariaDB and PostgreSQL on a massive server.
Try this inside each loop iteration:
// First, unset the current entry from the main array as soon as the iteration starts
unset($databases[$key]);
// Then unset the working copy and force a garbage-collection pass
unset($database);
gc_collect_cycles();
Also, set up a check to constantly monitor RAM usage under load to see if this happens on a particular $database. If your RAM gets too low, have that check chunk your $database and run multi-insert batches, unsetting them as they are done. This purges more RAM and avoids overly large array copies before the sub-insert loop, especially if you are using classes with constructors. With 4 GB, I would tend to set batches of 400 to 500 inserts max, depending on the overall length of your inserts.
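To make the batching idea concrete, here is a minimal sketch; insertBatch() is a hypothetical helper that issues one multi-row INSERT, and the 400-row batch size follows the suggestion above:

// Minimal sketch of the chunked-insert idea described above.
$batchSize = 400; // 400-500 rows per batch, depending on row length

while ($rows) {
    // array_splice() drains $rows in place, so memory shrinks as batches finish.
    $batch = array_splice($rows, 0, $batchSize);
    insertBatch($pdo, $batch); // hypothetical helper running one multi-row INSERT

    // Free the batch and force a collection pass before the next one.
    unset($batch);
    gc_collect_cycles();

    // Keep an eye on RAM under load, as recommended above.
    echo round(memory_get_usage(true) / 1048576, 1) . " MB in use\n";
}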
If your database server is crashing (or being killed by the OOM killer), then the reason is that it has been configured to use more memory than is available on the device.
You forgot to tell us what OS is running on the database nodes.
If you don't have root access to the server, then this is the fault of whoever configured it. Memory overcommit should be disabled (requires root access). A tool like mysqltuner will show you how much memory the DBMS is configured to use (requires admin privilege). See also this Percona post.
I think they are right about RAM, but it is worth noting that the tools you use are important.
Have you tried http://www.mysqldumper.net/ ?
If you use it (a PHP script), check the settings for the PHP memory limit and let it auto-detect.
I used to use http://www.ozerov.de/bigdump/, but it's so slow that I don't use it anymore.
MySQLDumper, on the other hand, is fast at both backups and restores and doesn't crash (if you set the memory limit).
I have found this tool to be exceptional.
Updated:
Your comments completely change the situation.
Here is my updated answer:
Since you have no admin access to the MySQL server, you need to take an alternative approach.
Mandatory: remove all special "things" from the import file, such as enclosing transactions, INSERT DELAYED / IGNORE, and so on.
Mandatory: use single-statement SQL.
I do not know what your inserts look like, but make each one a single statement inserting a single row; do not bundle many rows into one statement,
e.g.
instead of
insert into x (...fields...) values
(...single row...),
(...single row...),
(...single row...),
(...single row...);
do
insert into x(...fields...)values(...single row...);
insert into x(...fields...)values(...single row...);
insert into x(...fields...)values(...single row...);
insert into x(...fields...)values(...single row...);
Then try these:
You might try to "upload" my.ini with big buffers and so on. It is possible the provider of MySQL server to give you more RAM then. It is service after all :)
You might try to generate one file with the schema and separate files with the data. Then import the schema, then begin to import table by table, see where it crashes, and resume from the crashed file (see the sketch below, after these suggestions).
You might import everything as MyISAM tables. Then you can convert them to InnoDB with alter table x engine=innodb. However, doing so will lose all referential integrity and you will need to enforce it later.
You might import everything as MyISAM tables. Then, instead of converting them, you can do something like this for each table:
alter table x rename to x_myisam;
create table x(...);
insert into x select * from x_myisam;
I believe there is a single table that breaks the process. If you find it, you can handle it manually, for example by importing 10,000 rows at a time.
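For the schema-plus-data-files suggestion above, a possible sketch using mysqldump (the database name and $tables list are placeholders, and $db_host, $db_user, $db_password are assumed to be escapeshellarg()'d as in the question's script):

// Sketch: dump the schema once and the data per table, so a crashed import
// can be resumed at the failing table instead of starting over.
$db = 'phppoint_example'; // placeholder database name

system("mysqldump --host=$db_host --user=$db_user --password=$db_password --no-data $db > schema.sql");
foreach ($tables as $table) { // $tables: placeholder list of table names
    system("mysqldump --host=$db_host --user=$db_user --password=$db_password --no-create-info $db $table > data_$table.sql");
}

// Import the schema first, then one data file at a time; if a file fails,
// fix it and resume from that table onward.
system("mysql --host=$db_host --user=$db_user --password=$db_password $db < schema.sql");
foreach ($tables as $table) {
    system("mysql --host=$db_host --user=$db_user --password=$db_password $db < data_$table.sql");
}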
Alternative approach
If your server is in Amazon AWS or a similar service, you can try to "scale up" ("enlarge") the server for the import and "scale down" ("shrink") it after the import is done.
Old answer
Why do you use a PHP script? Try to create or generate a shell script via PHP, then run the shell script.
It is also very important to create a huge swap file on the system. Here is one way to do it (it might not work on older systems):
sudo su          # become root
cd /somewhere
fallocate -l 16G file001
mkswap file001
chmod 600 file001
swapon file001
Then execute the PHP or shell script.
Once it is done, you can swapoff and remove the file, or make it permanent in fstab.
Let me know if I need to clarify anything.
I have 4 million rows of data, some columns of which I need to migrate into a new MySQL database.
I need a PHP script which I can run without any memory failure and without losing any data.
I found it would be better to use a multi-batch process. Can anyone suggest what the better option is and how to do it in PHP and Linux?
If the two databases are on the same server, then you could get good performance with an INSERT ... SELECT, even across databases. Otherwise, you could export the data and use LOAD DATA INFILE to import it into the new structure (see 13.2.6 LOAD DATA INFILE Syntax).
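A minimal sketch of both options (the host, credentials, database, table, and column names are placeholders, and LOAD DATA LOCAL must be enabled on both client and server):

// Placeholders throughout: adjust host, credentials, schema and column names.
$pdo = new PDO('mysql:host=your-db-host', 'user', 'password', [
    PDO::MYSQL_ATTR_LOCAL_INFILE => true, // needed for LOAD DATA LOCAL INFILE
]);

// Same server: copy straight across databases with INSERT ... SELECT.
$pdo->exec(
    'INSERT INTO new_db.customers (id, name, email)
     SELECT id, name, email FROM old_db.customers'
);

// Different servers: export to a file, then bulk-load into the new structure.
$pdo->exec(
    "LOAD DATA LOCAL INFILE '/tmp/customers.csv'
     INTO TABLE new_db.customers
     FIELDS TERMINATED BY ',' ENCLOSED BY '\"'
     LINES TERMINATED BY '\n'
     (id, name, email)"
);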
If you're really determined to use PHP, then I'd suggest using background workers on a queue: that way each worker can process a small number of records (maybe 1000, avoiding memory problems). This would take a while longer to run, and also to set up. We've used php-resque for something similar (moving data from a DB into Redis) with good success.
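Very roughly, the php-resque approach could look like the sketch below. The class name, queue name, and SQL are all illustrative; chunking by OFFSET works for a one-off migration, though ranging over an indexed id column would be faster:

// Illustrative only: one job per chunk of ~1000 rows, so no worker holds much in memory.
class MigrateChunkJob
{
    public function perform()
    {
        $offset = (int) $this->args['offset'];
        $limit  = (int) $this->args['limit'];

        // Placeholders: host, credentials, schema and column names.
        $pdo  = new PDO('mysql:host=old-db-host;dbname=old_db', 'user', 'password');
        $rows = $pdo->query("SELECT id, name, email FROM customers LIMIT $limit OFFSET $offset")
                    ->fetchAll(PDO::FETCH_ASSOC);

        // ... transform $rows and insert them into the new database here ...
    }
}

// Enqueue the whole table as 1000-row chunks.
for ($offset = 0; $offset < 4000000; $offset += 1000) {
    Resque::enqueue('migration', 'MigrateChunkJob', ['offset' => $offset, 'limit' => 1000]);
}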
I now have a stable Beanstalkd and Laravel 4 Queue setup running on one machine. My question is, how can I install the Laravel 4 workers on a second machine and make them listen to my Beanstalkd? Maybe a very obvious question to some but I can't figure it out. I noticed there was a connection field in the php artisan queue:listen command. Do I have to use that?
how can I install the Laravel 4 workers on a second machine and make them listen to my Beanstalkd?
You'll need to have a working instance of your Laravel application on the same server as the listener/workers.
This means deploying your application both to the web server and to server that is listening for jobs.
Then, on the listening server, you can call php artisan queue:listen in order to listen for new jobs and create a worker to handle the job.
I noticed there was a connection field in the php artisan queue:listen command. Do I have to use that?
On top of the above question, and similar to most artisan commands, you will likely also need to define which environment the queue:listen command should use:
$ php artisan queue:listen --env=production
In this way, your Laravel app that handles the workers (the app on the listening server) will know which configuration to use, including which database credentials. This also likely means that both the web server and your job/listening server need access to your database.
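For example, app/config/queue.php on both servers would point at the same Beanstalkd host (the host below is a placeholder for wherever Beanstalkd actually runs):

// app/config/queue.php - identical on the web server and the listening server.
return array(

    'default' => 'beanstalkd',

    'connections' => array(

        'beanstalkd' => array(
            'driver' => 'beanstalkd',
            'host'   => '10.0.0.5', // shared Beanstalkd server, reachable from both machines
            'queue'  => 'default',
        ),

    ),

);

With that in place, the connection argument you noticed (php artisan queue:listen beanstalkd) simply tells the listener which of these connections to use; since beanstalkd is the default here, you can also omit it.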
Lastly, you could also create two separate Laravel applications: one for your web application and one purely to handle processing jobs. Then they could each have their own configuration, and you'd have two (probably smaller?) code bases. But still, you'll have two code bases instead of one.
In that regard, do whatever works best for your situation.
In my application (Laravel 5.1) I have different commands which are fairly simple:
Get an IP from RabbitMQ
Attempt to establish a connection to that IP
Update the DB entry
Since it can take a while to connect to the IP (up to 30 seconds), I am forced to launch multiple instances of that script. For that I've created console commands which are launched by systemd.
Once I went into production I was quite surprised by the amount of memory these scripts were consuming. Each script (as reported by memory_get_usage) was using about 21 MB on startup. Considering that ideally I would need to run about 50-70 of those scripts at the same time, this is quite a big issue for me.
Just as a test, I installed a clean Laravel 5.1 project and launched its default artisan inspire command; PHP reported 19 MB.
Is this normal for Laravel, or am I missing something crucial here?