I have the following scenario:
PHP (Server, Writer) ----> MySQL Database <---- PHP (Client, Reader/Writer)
PHPS = PHP Server
PHPC = PHP Client
How does it work?
PHPS writes data to temporary database tables (queue_*).
PHPC is triggered by an hourly cron job.
PHPC starts, connects to the database, and caches all records locally (how? No idea: a local MySQL database? SQLite?).
PHPC executes the tasks defined in those records one by one.
If a task is successful, it removes the record from the database.
If it is unsuccessful, it adds that record to a reports table in the database.
How do I implement this such that:
No half-written records from PHPS get to PHPC.
PHPC can cache all records locally with a single query before processing them.
Any other ideas you might have are highly appreciated.
MySQL's default locking will ensure that no "half-written" rows are fetched. As far as "caching locally" goes, all that means in your scenario is reading the rows out of the database into a local PHP data structure such as an array.
You can read about MySQL locking here: Locking in MySQL. Remember to unlock your tables after you finish writing data.
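A minimal sketch of the reader side, assuming PDO and a hypothetical queue_tasks table (id, payload); the names are illustrative only. If the writer commits each record in a transaction, a plain SELECT never sees half-written rows, and fetchAll() gives you the local cache in one query:

<?php
// Assumes an InnoDB queue_tasks table; processTask() is your own handler.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// One query caches every pending task into a local PHP array.
$tasks = $pdo->query('SELECT id, payload FROM queue_tasks')->fetchAll(PDO::FETCH_ASSOC);

foreach ($tasks as $task) {
    if (processTask($task)) {
        // Success: remove the record from the queue.
        $pdo->prepare('DELETE FROM queue_tasks WHERE id = ?')->execute([$task['id']]);
    } else {
        // Failure: move the record to the reports table.
        $pdo->prepare('INSERT INTO reports (task_id, payload) VALUES (?, ?)')
            ->execute([$task['id'], $task['payload']]);
    }
}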
Related
A CRM is hitting my server via webhooks at least 1000 times and I cannot process all the requests at once.
So I am thinking about saving them (in MySQL or a CSV file) and then processing one record at a time.
Which method is faster if there are approximately 100,000 records and I have to process one record at a time?
Different methods are available to perform such an operation:
You can store the data in MySQL and write a PHP script that fetches requests from the MySQL database and processes them one by one (a minimal sketch follows below). You can run that script automatically using crontab or a scheduler at a specific interval.
You can implement custom queue functionality using PHP + MySQL.
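Here is a rough sketch of such a cron-driven worker, assuming a hypothetical webhook_queue table with id, payload, and status columns; the names are illustrative, not prescribed:

<?php
// Run from crontab, e.g.: */5 * * * * php /path/to/worker.php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Claim and process one pending record at a time.
while (true) {
    $row = $pdo->query("SELECT id, payload FROM webhook_queue
                        WHERE status = 'pending' ORDER BY id LIMIT 1")
               ->fetch(PDO::FETCH_ASSOC);
    if (!$row) {
        break; // queue drained
    }
    processWebhook($row['payload']); // your handler
    $pdo->prepare("UPDATE webhook_queue SET status = 'done' WHERE id = ?")
        ->execute([$row['id']]);
}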
It sounds like you need the following:
1) An incoming queue table where all the new rows get inserted without processing. An appropriately configured InnoDB table should be able to handle 1,000 INSERTs per second unless you are running on a Raspberry Pi or something similarly underspecified. You should probably have this table partitioned so that instead of deleting records after processing, you can drop partitions (ALTER TABLE ... DROP PARTITION is much, much cheaper than a large DELETE operation); a sketch follows below the list.
2) A scheduled event that processes the data in the background, possibly in batches, and cleans up the original queue table.
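For instance, such a partitioned queue table might look like the following; the RANGE-by-day scheme, table name, and columns are assumptions, not a prescription:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Queue table partitioned by day; processed days are dropped, not DELETEd.
// Note that the partitioning column must be part of the primary key.
$pdo->exec("CREATE TABLE incoming_queue (
    id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    received DATE NOT NULL,
    payload TEXT,
    PRIMARY KEY (id, received)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(received)) (
    PARTITION p20240101 VALUES LESS THAN (TO_DAYS('2024-01-02')),
    PARTITION p20240102 VALUES LESS THAN (TO_DAYS('2024-01-03')),
    PARTITION pmax VALUES LESS THAN MAXVALUE
)");

// Once a day's rows are processed, dropping that partition is nearly free
// compared to DELETE-ing the same rows:
$pdo->exec("ALTER TABLE incoming_queue DROP PARTITION p20240101");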
As you probably know, CSV won't let you create indexes for fast searching. Indexes on your table columns speed up searching to a great degree, and you cannot ignore this fact.
If you need all your data from a single table (for instance, app config), CSV is faster; otherwise it is not. Hence, for simple inserting and table-scan (non-index-based) searches, CSV is faster. Also consider that updating or deleting from a CSV file is nontrivial. If you use CSV, you need to be really careful to handle multiple threads/processes correctly, otherwise you'll get bad data or corrupt your file.
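A minimal sketch of the kind of care that takes, using PHP's flock() to serialize appends from concurrent processes; the file path and fields are made up:

<?php
// $id and $payload are assumed to come from the incoming request.
// Hold an exclusive lock so concurrent processes cannot interleave writes.
$fp = fopen('/var/data/webhooks.csv', 'a');
if (flock($fp, LOCK_EX)) {
    fputcsv($fp, [$id, $payload, time()]);
    fflush($fp);          // flush before releasing the lock
    flock($fp, LOCK_UN);
}
fclose($fp);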
MySQL offers a lot of capabilities that CSV certainly does not, such as SQL queries, transactions, data manipulation, and concurrent access. MySQL, as mentioned by Simone Rossaini, is also safe. You should not overlook this fact either.
SUMMARY
If you are going for simple inserting and table-scan (non-index-based) searches, CSV is definitely faster. Yet it has many shortcomings when you compare it with the countless capabilities of MySQL.
Maybe this is an obvious question, but it's just something I'm unsure of. If I have two standalone PHP applications running on one LAMP server, and the two PHP applications share the same MySQL database, do I need to worry about data integrity during concurrent database transactions, or is this something that MySQL just takes care of "natively"?
What happens if the two PHP applications both try to update the same record at the same time? What happens if they try to update the same table at the same time? What happens if they both try to read data from the database at the same time? Or if one application tries to read a record at the same time as the other application is updating that record?
This depends on several factors:
the DB engine you are using
the locking policy / transaction isolation level you have set for your environment, or for your query:
https://dev.mysql.com/doc/refman/8.0/en/innodb-locking-reads.html
https://dev.mysql.com/doc/refman/8.0/en/innodb-locks-set.html
the code you are using: you could use SELECT ... FOR UPDATE to lock only the rows you want to modify (a sketch follows below):
https://dev.mysql.com/doc/refman/8.0/en/update.html
and how you manage transactions:
https://dev.mysql.com/doc/refman/8.0/en/commit.html
This is just a brief suggestion.
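For instance, a minimal sketch of the SELECT ... FOR UPDATE pattern from PHP, assuming a hypothetical accounts table on InnoDB:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    // Locks only this row until COMMIT; a concurrent writer blocks here.
    $stmt = $pdo->prepare('SELECT balance FROM accounts WHERE id = ? FOR UPDATE');
    $stmt->execute([42]);
    $balance = $stmt->fetchColumn();

    $pdo->prepare('UPDATE accounts SET balance = ? WHERE id = ?')
        ->execute([$balance + 100, 42]);

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}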
We have an iOS app which must download a large amount of user data from a remote server (in JSON format) and then insert this data into the local SQLite database. Because there is so much data, the insertion process takes more than 5 minutes to complete, which is unacceptable. The process must take less than 30 seconds.
We have identified a potential solution: get the remote server to store the user's data in an SQLite database (on the remote machine). This database is compressed and then downloaded by the app. Therefore, the app will not have to conduct any data insertion, making the process much faster.
Our remote server is running PHP/MySQL.
My question:
What is the fastest and most efficient way to create the SQLite database on the remote server?
Is it possible to output a MySQL query directly into an SQLite table?
Is it possible to create a temporary MySQL database and then convert it to SQLite format?
Or do we have to take the MySQL query output and insert each record into the SQLite database?
Any suggestions would be greatly appreciated.
I think it's better to have a look at why the insert process is taking 5 minutes.
If you don't do it properly in SQLite, every insert statement will be executed in a separate transaction. This is known to be very slow. It's much better to do all the inserts in one single SQLite transaction. That should make the insert process really fast, even if you are talking about a lot of records.
In PHP with PDO, for example (a sketch; the same single-transaction principle applies if you do the inserts on the device instead):

<?php
// Hypothetical two-column items table; one transaction around all inserts.
$db = new PDO('sqlite:/path/to/user_data.sqlite');
$db->beginTransaction();
$stmt = $db->prepare('INSERT INTO items VALUES (?, ?)');
foreach ($dataToInsert as $item) {
    $stmt->execute([$item['id'], $item['value']]);
}
$db->commit();
You can read a lot about this here: Improve INSERT-per-second performance of SQLite?
I have a problem with a project I am currently working on, built in PHP & MySQL. The project itself is similar to an online bidding system. Users bid on a project, and they get a chance to win if they follow their bid by clicking and clicking again.
The problem is this: if, for example, 5 users enter the game at the same time, I get an 8-10 second delay in the database. I update the database using UNIX_TIMESTAMP(CURRENT_TIMESTAMP), which makes the whole bidding system useless.
I also want to mention that the project is very database-intensive (around 30-40 queries per page), and I was thinking maybe the queries get delayed, but I'm not sure if that's happening. If that is the case, though, any suggestions on how to avoid this type of problem?
I hope I've been at least clear about this issue. It's the first time this has happened to me and I would appreciate your help!
You can decide on:
Optimizing or minimizing the required queries.
Caching queries that do not need to be updated on each visit (a sketch follows below).
Using summary tables.
Updating the queries only when data changes.
You have to do this cleverly. You can follow the MySQLPerformanceBlog for guidance.
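To illustrate the caching point, a minimal sketch using APCu (assuming that extension is available; any cache backend would do), with an invented query:

<?php
// Cache an expensive aggregate for 60 seconds instead of running it per visit.
$count = apcu_fetch('active_bid_count', $hit);
if (!$hit) {
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $count = $pdo->query('SELECT COUNT(*) FROM bids WHERE active = 1')
                 ->fetchColumn();
    apcu_store('active_bid_count', $count, 60); // TTL: 60 seconds
}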
I'm not clear on what you're doing, but let me elaborate on what you said. If you're using UNIX_TIMESTAMP(CURRENT_TIMESTAMP()) in your MySQL query, you have a serious problem.
The problem with your approach is that you are using MySQL functions to supply the timestamp that will be stored in the database. This is an issue because you then have to wait for MySQL to parse and execute your query before that timestamp is ever generated (and some MySQL engines, like MyISAM, use table-level locking). Other engines (like InnoDB) have slower writes due to row-level locking granularity. This means the time stored in the row will not necessarily reflect the time the request to insert that row was generated. It can also mean that the time you're reading from the database is not necessarily the most current record (assuming you are updating records after they were inserted into the table).
What you need is for the PHP request that generates the SQL query to provide the timestamp directly in the SQL query. That way the timestamp reflects the time the request was received by PHP, not the time the row happens to be inserted or updated in the database.
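For example (a sketch; the table and column names are made up):

<?php
// The timestamp is generated by PHP when the request arrives,
// not by MySQL when the row finally gets written.
$now = time();
$stmt = $pdo->prepare('INSERT INTO bids (user_id, amount, bid_time) VALUES (?, ?, ?)');
$stmt->execute([$userId, $amount, $now]);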
You also have to be clear about which MySQL engine your table is using. For example, engines like InnoDB use MVCC (Multi-Version Concurrency Control). This means a row can be written to while it is being read. If this happens, the engine uses its undo log to give the reading client the existing value while the new value is being written. That way you get row-level locking with faster and more stable reads, but potentially slower writes.
Is there a MySQL statement which provides full details of any other open connection or user? Or an equally detailed status report on MyISAM tables specifically? Looking at MySQL's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose.
What I'm trying to do: remote ODBC connection one is inserting several thousand records, which due to a slow connection speed can take up to an hour. TCP connection two, using PHP on the server's localhost, is running SELECT queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like it to first check that there are no pending inserts on any other connection to those specific tables, so it can wait until all data is available. If a table is currently being written to, I'd like to give the user of connection two an approximation of how much longer to wait, based on the number of pending inserts. Ideally, per table, I'd like to query the timestamp when connection one began the write, the total inserts left to be done, and the total inserts already completed. Instead of insert counts, even knowing the number of bytes written and left to write would work just fine here.
Obviously, since connection two is a TCP connection via a PHP script, all I can really use in that script is some sort of query. I suppose, since it is on localhost, I could exec() the mysql command line if that's the only way to get this information, but I'd rather not. I could also simply update a custom-made transaction log before and after this massive insert task, which the PHP script could check, but hopefully there's a built-in MySQL feature I can take advantage of.
Edit: "Transaction" was the wrong word - a language collision. I'm not actually using Mysql transactions. What I meant was currently pending tasks, queries, and requests.
You can issue SHOW FULL PROCESSLIST; to show the active connections.
As for the rest, MySQL doesn't know how many inserts are left or how long they'll take (and if you're using MyISAM tables, they don't support transactions). The server has no way of knowing whether your PHP scripts intend to send 10 more inserts or 10,000, and if you're doing something like INSERT INTO xxx SELECT ... FROM ..., MySQL doesn't track or expose how much is done and how much is left.
You're better off handling this yourself via other tables, where you insert/update data about when you started loading, track the state, record when it finished, etc. A sketch follows below.
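A minimal sketch of that idea; the import_progress table and its columns are invented for illustration:

<?php
// Connection one (the slow ODBC writer) updates a progress row as it goes;
// connection two (the PHP reader) polls it before running aggregates.
$pdo->exec("CREATE TABLE IF NOT EXISTS import_progress (
    tbl VARCHAR(64) PRIMARY KEY,
    started_at TIMESTAMP NULL,
    rows_total INT,
    rows_done INT,
    finished_at TIMESTAMP NULL
)");

// Reader side: check whether the import is still running.
$row = $pdo->query("SELECT rows_total, rows_done, finished_at
                    FROM import_progress WHERE tbl = 'measurements'")
           ->fetch(PDO::FETCH_ASSOC);
if ($row && $row['finished_at'] === null) {
    printf("Still importing: %d of %d rows done.\n", $row['rows_done'], $row['rows_total']);
}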
If the operations are being performed on InnoDB tables, you can get full transaction details with SHOW ENGINE INNODB STATUS. It's a huge blob of output, but part of it is the transaction/lock status for each process/connection.