I am using master-master MySQL replication, and it is working fine. On both servers I added:
replicate-do-db=DB1
replicate-do-db=DB2
But I have n databases, and the number grows day by day. It is quite difficult to add each new database to my.cnf every day. I used
replicate-ignore-db=information_schema
on both servers, but that doesn't work.
What I actually want is to replicate all databases, including databases created in the future, automatically.
If you have no filters in place, it will replicate everything by default. Run SHOW SLAVE STATUS on each side and look at the Slave_IO_State, Slave_IO_Running, Slave_SQL_Running, and Last_xxx_Error fields for clues as to what is happening. – IGGt
Slave_IO_Running: Yes, Slave_SQL_Running: No, Slave_IO_State: Waiting for master to send event, Last_Error: Error 'Can't drop database …', Last_Errno: 1008
Your suggestions helped me. Now it's working fine. – Janmabhumi
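A minimal sketch of the checks suggested in the comment above; SHOW SLAVE STATUS is the standard statement, and the fields named are the ones worth inspecting:

SHOW SLAVE STATUS\G
-- inspect: Slave_IO_State, Slave_IO_Running, Slave_SQL_Running, Last_Errno, Last_Error
-- both *_Running fields should report Yes once replication is healthy

To replicate every current and future database, simply leave out the replicate-do-db lines (and any other replication filters) in my.cnf on both servers and restart MySQL; with no filters configured, everything is replicated by default.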
Related
I have an AWS EC2 instance with a dual-core CPU and 4 GB of memory. I have set up an Apache2 HTTP server running PHP 7.0.30 and MySQL Ver 14.14 Distrib 5.7.22.
Various devices send GET/POST requests to my HTTP server, and each request runs SELECT and UPDATE queries.
Right now there are around 200 devices hitting my HTTP server simultaneously, all running SELECT and UPDATE queries together. These requests carry data in JSON format.
The problem is that my MySQL server has become very slow. It takes a long time to fetch data with the SELECT queries and load pages.
In phpMyAdmin I see a number of processes in the Sleep state. I have also tuned various parameters of my MySQL server, but with no result.
One of the major queries taking time is an UPDATE that writes long text data into a table; it arrives from every device simultaneously every 60 seconds, and its processes only clear from the MySQL server status after a long time.
Is there a way to optimize this with MySQL parameters, so the server stays fast even with thousands of queries over multiple connections updating the long-text column?
Most of the global variables have their default values. I also tried changing the values of various global variables, but it didn't produce any result.
How can I reduce this slow processing of queries?
P.S.: I believe the issue is due to the UPDATE queries. I have tuned the SELECT queries and they seem fine, but for the UPDATE queries I see sleeps of up to 12 seconds in the Processes tab of phpMyAdmin.
I have added a link to an image showing the issue (here you can see sleeps of even 13 seconds, all on UPDATE queries):
Here is the PasteBin for the query of an UPDATE operation:
https://pastebin.com/kyUnkJmz
That is ~25KB for the JSON! (Maybe 22KB if backslashes vanish.) And 40 inserts/sec, but more every 2 minutes.
I would like to see SHOW CREATE TABLE, but I can still make some comments.
In InnoDB, that big a row will be stored 'off record'. That is, there will be an extra disk hit to write that big string elsewhere.
Compressing the JSON should shrink it to about 7K, which may lead to storing that big string inline, thereby cutting back some on the I/O. Do the compression in the client to help cut back on network traffic. And make the column a BLOB, not TEXT.
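A hedged sketch of that column change; the table and column names (device_data, payload) are placeholders, since the actual schema was not shown:

-- store the (client-side compressed) JSON as binary data rather than TEXT
ALTER TABLE device_data MODIFY payload MEDIUMBLOB;

MySQL also has COMPRESS()/UNCOMPRESS() functions, but compressing in the client, as suggested above, additionally cuts network traffic.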
Spinning drives can handle about 100 I/Os per second.
The 200 devices posting every 5 seconds work out to an average of 40 writes/second to keep up with. That's OK.
Every 2 minutes there are an extra 40 writes. This may (or may not) push the amount of I/O past what the disk can handle. This may be the proximate cause of the "updating for 13 seconds" you showed. That snapshot was taken shortly after a 2-minute boundary?
Or are the devices out of sync? That is, do the POSTs all come at the same time, or are they spread out across the 2 minutes?
If each Update is a separate transaction (or you are running with autocommit=ON), then there is an extra write -- for transactional integrity. This can be turned off (a tradeoff between speed and security): innodb_flush_log_at_trx_commit = 2. If you don't mind risking one second's worth of data, this may be a simple solution.
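The variable named above can be changed at runtime and persisted in the config file; the variable name is standard, and only the choice of value 2 is the trade-off described above:

SET GLOBAL innodb_flush_log_at_trx_commit = 2;
-- to keep it across restarts, also set innodb_flush_log_at_trx_commit = 2 under [mysqld] in my.cnf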
Is anything else going on with the table? Or is it just these Updates?
I hope you are using InnoDB (which is what my remarks above are directed toward), because MyISAM would be stumbling all over itself with fragmentation.
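A quick way to check which engine the table uses (the table name is a placeholder):

SHOW TABLE STATUS LIKE 'device_data';
-- if the Engine column reports MyISAM, the table can be converted:
-- ALTER TABLE device_data ENGINE = InnoDB;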
Long "Sleeps" are not an issue; long "Updates" are an issue.
More
Have an index on usermac so that the UPDATE does not have to slog through the entire table looking for the desired row. You could probably drop the id and add PRIMARY KEY(usermac).
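A sketch of that indexing suggestion; the table name is again a placeholder, and the PRIMARY KEY variant only applies if usermac is unique per row:

ALTER TABLE device_data ADD INDEX idx_usermac (usermac);
-- or, if usermac uniquely identifies a row, drop the surrogate id entirely:
-- ALTER TABLE device_data DROP COLUMN id, ADD PRIMARY KEY (usermac);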
Some of the comments above are off by a factor of 8 -- there seem to be 8 JSON columns in the table, hence 200KB/row.
I have a Raspberry Pi Model B+, which has 512 MB of memory. I'm trying to update a MySQL database on my Pi with PHP, copying over transactions from a third party via their API. It's also a LEMP server setup, if that makes a difference.
The issue is that when the page runs it updates the transactions (1,900 of them), but the page itself doesn't finish loading, i.e. it never shows the summary information; I simply get a blank page.
I admit this is the initial stage, and after the initial copy of old transactions to my Pi the code should not have to update more than 100 transactions at a time, but I'm curious what the issue actually is and how to manage it.
The code works fine with a couple of accounts that have only a few hundred transactions each. It finishes and shows a few lines of info on how many accounts and transactions were updated.
The code doesn't finish when it gets to an account with nearly 2,000 transactions.
Things noticed:
Initially, running the code on just the one account, it updated 500 transactions and then stopped/crashed.
After increasing the www.conf max_children setting from 5 to 15 (I even tried 50), the code now updates all 2,000 transactions, but it still doesn't finish: the page shows no summary information and no header (which contains buttons etc.), just a blank screen.
I've tried adding set_time_limit to the PHP code, but it doesn't seem to do much.
I had a look in the PHP, MySQL, etc. error logs in /var/log/messages and nothing is in them apart from standard events.
I'm happy to accept the limitations of my Pi but would love to learn why it's failing and how to manage it.
I haven't posted the code as there's a lot of it, but the logic is as below; let me know what you would like to see:
Retrieve all transactions from 3rd party API and store in array.
Process array into a new array that only contains the needed data, formatting and editing certain fields.
Cycle through the array and insert it into the SQL database via a prepared statement. (I briefly saw that SQL transactions exist but haven't looked into them; see the sketch after this list.)
Count the number of accounts updated and number of transactions.
Print summary information.
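A minimal sketch of step 3 with the inserts wrapped in one transaction, which usually commits far faster than one transaction per row; the table and column names are invented for illustration, since the real schema isn't shown:

START TRANSACTION;
-- the PHP loop executes the prepared statement once per array element, e.g.:
INSERT INTO transactions (account_id, posted_at, amount, description) VALUES (?, ?, ?, ?);
-- ... repeated for each transaction ...
COMMIT;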
I am hosting a game server on my LAN and I would like to show players whether the server is online. Currently the database it uses records the time the database came online, but if the DOS windows that run the game are closed, the game is closed too, and the database doesn't reflect that. What I would like to do is add a field to the database that is updated every 15 minutes, showing whether the server is on or not.
UPDATE `server` SET lastupdated = NOW();
Clearly it won't be identical to that, but that's the idea I'm going after. Then my website will show the server as online as long as lastupdated is no more than 15 minutes old. I just have absolutely no idea where to start or how to create this.
The game server is on my computer, but the database is on my web host, so I can't run a local MySQL query either. Any help would be awesome.
If the column doesn't already exist, you're probably going to have to use an alter table to add a column to db_name.table_name.
Once the column is added, I think your DB would now be ready to accept queries (updates/selects) to display the last updated time.
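A minimal sketch of the whole round trip, assuming a single-row table named server; the column name is just illustrative:

-- one-time setup: add the column
ALTER TABLE `server` ADD COLUMN lastupdated DATETIME;
-- run from the game machine (e.g. a scheduled task) every 15 minutes:
UPDATE `server` SET lastupdated = NOW();
-- run by the website to decide what to display:
SELECT lastupdated >= NOW() - INTERVAL 15 MINUTE AS online FROM `server`;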
Hello guys, I need some advice on this situation:
For example, I have a free classified-ads website where a user can post classified ads.
An ad is listed on the website for a maximum of 30 days; on the 31st day it should automatically be deleted from the database, along with its images on the server. The question is:
$db_ad_tbl('id','user_id','title','description',timestamp);
What is the right approach for doing this?
Can anyone suggest tutorials/links that cover the same situation?
Another approach that does not require cron is to use MySQL events. If you can come up with the correct query, you can set it up as a recurring event. phpMyAdmin 4.0.x supports event handling from its interface.
See http://dev.mysql.com/doc/refman/5.5/en/events.html.
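A hedged sketch of such an event; the table name db_ad_tbl and the timestamp column are taken from the question, everything else is an assumption:

-- the event scheduler must be enabled, e.g. SET GLOBAL event_scheduler = ON;
CREATE EVENT purge_expired_ads
ON SCHEDULE EVERY 1 DAY
DO
  DELETE FROM db_ad_tbl WHERE `timestamp` < NOW() - INTERVAL 30 DAY;

Deleting the image files on disk still has to happen outside MySQL, for example in a cron script like the one described in the next answer.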
As Barmar has noted, you should add a cron job for this task. You can write a simple PHP script and then add it to your crontab with something like:
1 0 * * * php -f /path/to/file/clean.php
This means the PHP file will be executed every day at one minute past midnight.
Just a few notes:
The file should not be in your web folder.
You might want to do some tests and report errors by email (such as being unable to connect to the DB).
If you build more of these, you should keep a list of them somewhere in case you switch servers (or the server dies).
If you use a config file (e.g. to store your DB connection details), make sure it is readable by the user the cron job runs as.
Most hosting platforms allow crontab editing and run cron jobs as the same user as the web server, so this should not be a problem.
There is really no other good solution to this than creating a cron job, unless you check the timestamp every time you read the data from the database and delete rows that have passed their expiry (e.g. DELETE FROM my_table WHERE `timestamp` < [expiry cutoff]). This is risky, though: you have to include the timestamp check every time you query, and you risk storing expired items forever if they are never requested from the database again.
I'm testing an app hosted at app.promls.net, but there is some problem with the script execution. On localhost
it takes only -> timer: 0.12875008583069 seconds
when the page is just plain text created via PHP.
When the content is created dynamically and comes from the MySQL database:
timer: 0.44203495979309 seconds / timer: 0.65762710571289 seconds / timer: 0.48272085189819 seconds
The times are different on the server: execution takes around 8 seconds.
Could anyone give me a recommendation on how to test and optimize my PHP execution?
I was optimizing the MySQL database, using DESCRIBE and EXPLAIN, because some queries returned tons of rows for a simple search.
But now I have finished, and I would like to explore some new options for PHP execution.
I know that adding compression to the HTML helps, but it only helps the transport time between the server and the client when an HTML response is returned. Now I want to optimize PHP execution, and see whether there are some tricks on the MySQL side that could be implemented to improve the response time further.
Note: I have been thinking about using HipHop for PHP and memcache or Cassandra, but I guess those are not the answer to this problem, since my app has no user activity and not much data yet.
Thanks in advance; I'm available for any comments or suggestions.
With such a big difference in execution time, we would need details on the host configuration (shared? dedicated?).
Is MySQL skipping DNS lookups? If not, try the skip-name-resolve setting in my.cnf, or use IP addresses instead of hostnames in the privilege/user tables. The only time I have seen such latency, it was because of a DNS timeout in the connection between MySQL and PHP.
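A quick way to check whether the setting mentioned above is already active (the variable name is standard on reasonably recent MySQL versions):

SHOW VARIABLES LIKE 'skip_name_resolve';
-- if it reports OFF, add skip-name-resolve under [mysqld] in my.cnf and restart MySQL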
First off, try doing the following to your MySQL DB:
Run "OPTIMIZE TABLE mytable" on all of your tables
Run "ANALYZE TABLE mytable" on all of your tables.
Add indexes to the table fields that you are using
Be sure to substitute each table name for "mytable" in the above statements.
See if doing the first two makes a difference, then add the indexes.
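A short sketch of those three steps with placeholder table and column names:

OPTIMIZE TABLE mytable;
ANALYZE TABLE mytable;
-- add an index on each column used in WHERE clauses or JOIN conditions, for example:
ALTER TABLE mytable ADD INDEX idx_search_field (search_field);
-- repeat for every table, substituting the real table and column names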