Strange cause for 'MySQL server has gone away' - php

Despite many questions regarding this MySQL error, I haven't found a solution to my problem.
My (PHP) application requires a generated DAL (database access layer). The generation is done by a PHP script called from the command line, which reads all tables from the information schema and generates files according to that schema.
This had been working for as long as I can remember, until recently. Every time I start the generation process, my 'normal' application loses its connection to the MySQL server.
The strange thing is that I can still connect to the database using a program such as Sequel Pro: same database, same connection settings, same machine.
Also, when I try to restart MySQL on the command line using sudo /usr/local/mysql/support-files/mysql.server restart I get the error:
ERROR! MySQL server PID file could not be found
Then 'Starting MySQL' hangs until I kill it. The only fix I have found is to completely reboot my computer. Everything then works fine again until I rerun the generation script.
So, to summarize, I have two problems:
When I run the script using the information schema, my normal application loses the ability to connect to the MySQL server, while other applications still can.
When that happens, I'm unable to restart MySQL to fix the problem.
The second problem is obviously way less important than the first one. I'd rather not have to restart MySQL at all.
What the generation script does:
First:
SELECT *
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = DATABASE() %s
  AND SUBSTRING(TABLE_NAME, 1, 1) != '_'
(The %s is a placeholder that the PHP script fills in before running the query.)
Then a PHP loop iterates over the tables, and for each one runs:
SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_SCHEMA=DATABASE()
AND TABLE_NAME=:tableName
The rest of the script then generates PHP files from that information.

Related

MySQL Automate backups

I am trying to automate the backups of our MySQL databases.
The databases are hosted on shared servers that can be restarted at any time, which means cron jobs won't be persistent (according to my web host's support).
I am currently running the jobs manually via MySQL Workbench at given intervals.
I am trying to automate this process, but I cannot fathom how to do it. There seem to be no options in MySQL Workbench, and Google yields nothing.
I have attempted to run mysqldump on my local machine, with a view to creating some kind of script to do this from my machine. But when trying to connect I get an error - mysqldump: Got error: 2049: Connection using old (pre-4.1.1) authentication protocol refused (client option 'secure_auth' enabled) - which I can't seem to disable at the server end or override at my end.
Any advice?
The standard automatic backup for MySQL is the so-called MySQL Enterprise Backup (MEB). MySQL Workbench works with MEB, but as the name indicates, this solution is only available for MySQL Enterprise servers.
Another solution would be to run a cron job on the target server (using mysqldump). If the server uses anacron, jobs missed while the system was down are executed once it is up again, so this is a reliable solution too.
If you cannot install a cron job on the target machine then you will have no choice but to manually trigger the backup either with MySQL Workbench or on the commandline.
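A minimal sketch of the cron-plus-mysqldump approach described above (the credentials, database name, and backup path are placeholders, not values from the question):

```shell
# Hypothetical crontab entry: dump the database every night at 02:30.
# Note that % is special in crontab files and must be escaped as \%.
30 2 * * * mysqldump -uUSER -pPASSWORD DBNAME > /var/backups/DBNAME-$(date +\%F).sql
```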

Flush, Lock MySQL tables and Copy Data Folder

I'm thinking about building a PHP script that flushes, locks, and copies the MySQL data folder. Because I need to lock the tables and a typical dump takes five minutes plus, I figure a flush, lock, and file copy of the data folder should be quicker. Does anyone have experience with this, and is it a viable solution?
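For reference, the flush-lock-copy sequence the question describes might look like this on a Unix-like host (the datadir path and credentials are assumptions; the key point is that FLUSH TABLES WITH READ LOCK only holds while the session that issued it stays connected):

```shell
# Keep one mysql session open for the whole copy: the read lock is
# released as soon as the client that acquired it disconnects.
mysql -uroot -p <<'SQL'
FLUSH TABLES WITH READ LOCK;
-- the client's SYSTEM command runs a shell command without closing
-- the session, so the lock stays held for the duration of the copy
SYSTEM cp -a /usr/local/mysql/data/mydb /backups/mydb
UNLOCK TABLES;
SQL
```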
Also look at Percona XtraBackup if you are planning to do non-stop backups of your data.
The MySQL devs are way ahead of you. They rolled your exact method, with some syntactic sugar and proper error checking, into a command called mysqlhotcopy.
Might be too late now, but which phase does all this time break down into? If your server spends most of the five minutes copying the files rather than actually flushing, your problem is simply a slow disk.
I think the best answer to the question is the following Windows command:
set bin=C:\Program Files\MySQL\MySQL Server 5.6\bin
"%bin%/mysql" -e "FLUSH TABLES WITH READ LOCK; UNLOCK TABLES;" --user=UserName --password=Password DatabaseName
This quickly forces all MySQL data for a database out to the three or four files that hold it in the MySQL data folder, from which they can be copied to some other folder. You'll have to adjust this command for your particular version of MySQL, your database, and your admin user.
I couldn't get the other answers to work, but this BAT/CMD command works fast and well in my experience.
The only other suggestion I can make is to use MySQL Workbench (which comes with MySQL) to stop the MySQL server. When it is stopped, its data is flushed to disk. Don't forget to start your MySQL server again when you are finished using the files directly (at which point MySQL re-reads the database files from disk).
Note: if you simply copy the data files for a database into the data folder of another MySQL instance that is already running, you won't see the data in MySQL applications! You would think MySQL would check the modification times of the files on disk to detect updates, but it doesn't. And it doesn't keep its files locked to show Windows that they are in use, which they are. So strange.

Build database with .sql file with PHP

OK, I have searched high and low and have been unable to find an answer that works for what I am trying to do. I have two databases; we'll call them DB1 and DB2. I have a cron job that runs every night at 4 am, backing up DB1 and storing its data in a SQL archive, which we'll call db_backup.sql. The file is stored in a folder on the server at ROOT/backups/db_backup.sql.
Info
database names: DB1 and DB2
backup filename: db_backup.sql
backup file path: ROOT/backups/db_backup.sql
What I'm trying to do:
I want to use the db_backup.sql file to build DB2. I am basically trying to set up database replication, replicating DB1 out to DB2. I don't know of any other way to do this on shared hosting servers than what I'm describing. I am trying to use PHP to import the db_backup.sql file into DB2.
My Environment:
The website and databases are on a shared hosting account with GoDaddy (yes, I would love dedicated servers to set up real replication, but I can't afford that for now). The databases are MySQL, administered through phpMyAdmin.
Is this something that is possible? Any help would be greatly appreciated! Let me know if you have any questions as well.
I'm not sure I understand your problem. You need to copy the db_backup file to the second host, where you have access to the database, and load the SQL file.
From a shell:
mysql -hhost -uusername -ppassword databasename < db_backup.sql
This will restore the tables on the second machine.
It should be as simple as setting up a cron job on server 2 that calls a script on server 1 which dishes out the SQL file; that cron job script then imports / rebuilds the DB.
Make sure to require a hash (via GET or POST) before giving out the DB SQL on server 1, or else anyone could read it and have your database dump.
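A sketch of what that cron job on server 2 might look like (the URL, token, and credentials are all hypothetical):

```shell
# Fetch the dump from server 1 (guarded by a secret token) and load it into DB2.
curl -fsS "https://server1.example.com/db_backup.php?token=SECRET" -o /tmp/db_backup.sql
mysql -uUSER -pPASSWORD DB2 < /tmp/db_backup.sql
```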
Could you avoid PHP altogether and connect directly to the database remotely to perform the backup, either via the command line or using a scripting language more suitable for long-running processes?

PHP & MySQL on Mac OS X: Access denied for GUI user

I have just installed and configured Apache, MySQL, PHP and phpMyAdmin on my Macbook in order to have a local development environment. But after I moved one of my projects over to the local server I get a weird MySQL error from one of my calls to mysql_query():
Access denied for user
'_securityagent'#'localhost' (using
password: NO)
First of all, the query I'm sending to MySQL is valid, and I've even tested it through phpMyAdmin with perfect results. Secondly, the error only happens here, while I have at least four other MySQL connections and queries per page. This call to mysql_query() happens at the end of a really long function that handles data for newly created or modified articles. This is basically what it does:
Collect all the data from article form (title, content, dates, etc..)
Validate collected data
Connect to database
Dynamically build SQL query based on validated article data
Send query to database before closing the connection
Pretty basic, I know. I did not recognize the username "_securityagent" so after a quick search I came across this from an article at Apple's Developer Connection talking about some random bug:
Mac OS X's security infrastructure gets around this problem by running its GUI
code as a special user, "_securityagent".
So, as suggested by Frank in the comments, I put a var_dump() on all variables used in the mysql_connect() call, and every time it returns the correct values (where the username is not "_securityagent", of course). Thus I'm wondering if anyone has any idea why '_securityagent' is trying to connect to my database - and how I can keep this error from occurring when I call mysql_query().
If the username is not specified explicitly, MySQL tries to guess it by using the name of the current system user.
You don't have to accept that; you just need to specify the desired username explicitly.
How depends on how you're connecting. In the case of phpMyAdmin it's config.inc.php; add a line like:
$cfg['Servers'][1]['user'] = 'Eirik';
(see the manual)
Did you set up your local AMP server using a pre-made package, or did you install MySQL, PHP, etc. through the respective OS-specific download packages? Setting up Apache, MySQL, and PHP4/5 can be a real PITA.
If you're having problems with your setup I'd recommend MAMP. It's a nifty all-in-one package that really does the trick. You can still access all the config files you want, and everything is contained in the MAMP folder instead of spread all over the system. If Apple upgrades the pre-installed version of Apache/PHP, your machine-specific config wouldn't be overridden as it would in the case of using pre-installed Apache/PHP.

How to download a live MySQL db into a local test db on demand, without SSH?

I have a fairly small MySQL database (a Textpattern install) on a server that I do not have SSH access to (I have FTP access only). I need to regularly download the live database to my local dev server on demand; i.e., I would like to either run a script and/or have a cron job running. What are some good ways of doing this?
Some points to note:
Live server is running Linux, Apache 2.2, PHP 5.2 and MySQL 4.1
Local server is running the same (so using PHP is an option), but the OS is Windows
Local server has Ruby on it (so using Ruby is a valid option)
The live MySQL db can accept remote connections from different IPs
I cannot enable replication on the remote server
Update: I've accepted BlaM's answer; it is beautifully simple. Can't believe I didn't think of that. There was one problem, though: I wanted to automate the process, but the proposed solution prompts the user for a password. Here is a slightly modified version of the mysqldump command that passes in the password:
mysqldump -u USER --password=MYPASSWORD DATABASE_TO_DUMP -h HOST > backup.sql
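As an aside, putting the password on the command line makes it visible in the process list; a MySQL option file avoids that. A sketch (the user and password are placeholders):

```shell
# ~/.my.cnf, readable only by you (chmod 600 ~/.my.cnf).
# Options in the [mysqldump] group apply only to mysqldump.
[mysqldump]
user=USER
password=MYPASSWORD
```

With this in place, mysqldump -h HOST DATABASE_TO_DUMP > backup.sql needs no credentials on the command line.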
Since you can access your database remotely, you can use mysqldump from your windows machine to fetch the remote database. From commandline:
cd "into mysql directory"
mysqldump -u USERNAME -p -h YOUR_HOST_IP DATABASE_TO_MIRROR >c:\backup\database.sql
The program will ask you for the database password and then generate a file c:\backup\database.sql that you can run on your windows machine to insert the data.
With a small database that should be fairly fast.
Here's what I use. It dumps the database from the live server while loading it into the local server.
mysqldump -hlive_server_addresss -ulive_server_user -plive_server_password --opt --compress live_server_db | mysql -ulocal_server_user -plocal_server_password local_server_db
You can run this from a bat file. You can even use a scheduled task.
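On Windows, the scheduled task could be created like this (the task name, time, and script path are assumptions, with the dump-and-load pipeline saved in the bat file):

```shell
REM Create a daily task at 04:00 that runs the dump-and-load bat file.
schtasks /Create /SC DAILY /ST 04:00 /TN "MirrorLiveDB" /TR "C:\scripts\mirror_db.bat"
```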
Is MySQL replication an option? You could even turn it on and off if you didn't want it constantly replicating.
This was a good article on replication.
I would create a (Ruby) script to do a SELECT * FROM ... on all the databases on the server and then do a DROP DATABASE ... followed by a series of new INSERTs on the local copy. You can do a SHOW DATABASES query to list the databases dynamically. Now, this assumes that the table structure doesn't change, but if you want to support table changes also you could add a SHOW CREATE TABLE ... query and a corresponding CREATE TABLE statement for each table in each database. To get a list of all the tables in a database you do a SHOW TABLES query.
Once you have the script you can set it up as a scheduled job to run as often as you need.
@Mark Biek:
Is MySQL replication an option? You could even turn it on and off if you didn't want it constantly replicating.
Thanks for the suggestion, but I cannot enable replication on the server. It is a shared server with very little room for maneuver. I've updated the question to note this.
Depending on how often you need to copy down live data and how quickly you need to do it, installing phpMyAdmin on both machines might be an option. You can export and import DBs, but you'd have to do it manually. If it's a small DB (and it sounds like it is), and you don't need live data copied over too often, it might work well for what you need.
