I need to update and read data from one table at the same time from several different PHP scripts.
I create $sess as a per-script session identifier (around 20 scripts run at the same time) and I write that identifier into 100 rows of the table to reserve them. The script then SELECTs the rows reserved under its session identifier, and in the while loop it does some work with the data and updates the reserved rows.
But the scripts don't run concurrently: the first script works fine, while the others don't even execute their first query until the first script has finished. I can see this in my database management app.
// Unique-ish identifier for this script run, derived from the current microtime
$sess = intval(str_replace(".", "", microtime(TRUE)));
// Reserve 100 unclaimed rows for this script
sql_query("UPDATE locations SET sess='$sess' WHERE sess='0' LIMIT 100");
// Fetch back the rows that were just reserved
$r = sql_query("SELECT * FROM locations WHERE sess='$sess'");
while ($q = sql_row($r))
{
    // ... work with the row and update it ...
}
Create table syntax:
CREATE TABLE `locations` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`country_id` int(11) unsigned DEFAULT NULL,
`area_id` int(11) unsigned DEFAULT NULL,
`timeZone` int(2) DEFAULT NULL,
`lat` double DEFAULT NULL,
`lon` double DEFAULT NULL,
`locationKey` int(11) unsigned DEFAULT NULL,
`cityId` int(11) unsigned DEFAULT NULL,
`sess` bigint(100) unsigned DEFAULT '0',
PRIMARY KEY (`id`),
KEY `lat` (`lat`),
KEY `lon` (`lon`),
KEY `country_id` (`country_id`),
KEY `cityId` (`cityId`),
KEY `area_id` (`area_id`),
KEY `sess` (`sess`)
) ENGINE=InnoDB AUTO_INCREMENT=3369269 DEFAULT CHARSET=utf8;
Are you inside a transaction (BEGIN...COMMIT)? If so, you have locked those 100 rows. Ditto for autocommit=0.
Instead, be sure the UPDATE (which is used to assign 100 items from your 'queue' to your process) is in a transaction by itself.
Then, assuming the other threads cannot call microtime in the same microsecond (a dubious assumption), the 100 items are safely assigned to you. Then you can start another transaction (if needed) and process them.
However, that 100-row SELECT will unnecessarily put a read lock on those rows.
So...
START TRANSACTION;
UPDATE ... LIMIT 100;
SELECT ...
COMMIT;
foreach ...
START TRANSACTION;
work on one item
COMMIT;
end-for
It is unclear whether the second START-COMMIT should be inside the for loop or outside. Inside will be slower, but may be 'correct', depending on what "work" you are doing and how long it could take. (You don't want to ROLLBACK 99 successful 'works' because the 100th took too long.)
Are you later doing DELETE ... WHERE sess = $sess? If you are mass-deleting like that, then do it in a separate transaction after the for loop.
Goal: segregate the queuing transactions from the application transactions.
Note that the segregation will make it easier to code for errors/deadlocks/etc. (You are checking, correct?)
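To make that concrete, here is a minimal PHP sketch of the pattern, assuming a mysqli connection in $db (the question uses custom sql_query() wrappers, so adapt the calls accordingly):
<?php
// Transaction 1: assign 100 unclaimed rows to this script and read them back.
$sess = intval(str_replace(".", "", microtime(TRUE)));
$db->begin_transaction();
$db->query("UPDATE locations SET sess='$sess' WHERE sess='0' LIMIT 100");
$r = $db->query("SELECT * FROM locations WHERE sess='$sess'");
$rows = array();
while ($row = $r->fetch_assoc()) {
    $rows[] = $row;
}
$db->commit(); // the 100 rows now belong to this script

// Transactions 2..n: process each item on its own, so one failure
// never rolls back the other 99 pieces of work.
foreach ($rows as $row) {
    $db->begin_transaction();
    // ... work on $row and UPDATE it here ...
    $db->commit();
}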
I am using phpMyAdmin to manage my database. In one of my tables, when I click to see the last page (30 records) of 60,000 records, I get this alert:
"This operation could take a long time. Proceed anyway?" In practice it does not take long at all; the records show up almost instantly.
By the way, my table structure is as follows:
CREATE TABLE `documents` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) DEFAULT NULL,
`type` char(50) NOT NULL,
`comment` varchar(512) DEFAULT NULL,
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `user_id` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ;
So why am I getting this alert?
phpMyAdmin, as you know, is a PHP-based database management panel. To show the last page it has to scan through the table to reach the last 30 rows, which means it steps over every record between 0 and 60,000 just to retrieve records 59,970 to 60,000.
Depending on how fast your web and SQL servers are, this can take a long time. The warning message is simply there to say that fetching these records could be slow because of the size of your table.
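For illustration, the request for the last page turns into roughly the following query (the exact statement phpMyAdmin builds may differ); the large offset is what forces the server to walk past all the earlier rows first:
-- Skip the first 59,970 rows, then return the final 30
SELECT *
FROM `documents`
LIMIT 59970, 30;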
I have a small (100-ish rows, 5 columns) table which is displayed in full for a control panel feature. When testing during development with IntelliJ, it responds to the initial request but never finishes executing, and thus never serves any content. If I deploy the PHP files to my local web server, it serves the same content with no hesitation at all. Sometimes, when I load parts of the control panel that use no database access, it loads just fine (albeit slowly). I've upped the maximum memory allowed for requests in my cli/php.ini, and also increased the memory available to IntelliJ. My idea64.vmoptions is as follows:
-Xms128m
-Xmx3G
-XX:MaxPermSize=750m
-XX:ReservedCodeCacheSize=200m
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
-Djsse.enableSNIExtension=false
-XX:+UseCodeCacheFlushing
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=50
-Dawt.useSystemAAFontSettings=lcd
If I dump the table, the page loads again, so I assume the problem is related to how much memory IntelliJ allows PHP to use, but I'm quite stumped as to what to look for. The only special thing about the table, as far as I know, is that it uses a very large primary key column. The table structure is as follows:
CREATE TABLE IF NOT EXISTS `links` (
`url` VARCHAR(767) NOT NULL,
`link_group` INT(10) UNSIGNED NOT NULL,
`isActive` TINYINT(1) NOT NULL DEFAULT '1',
`hammer` TINYINT(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`url`),
KEY `group` (`link_group`)
)
ENGINE =InnoDB
DEFAULT CHARSET =utf8mb4,
ROW_FORMAT = COMPRESSED;
The row format is compressed to allow for such a large primary key. How should I proceed to find the cause, if not solve it outright?
I tried following Peter's suggestions, to no avail. I'm beginning to think this may just be IntelliJ not being able to serve PHP properly in my case. The new table structure is as follows:
CREATE TABLE IF NOT EXISTS `links` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`url` varchar(767) NOT NULL,
`link_group` int(10) unsigned NOT NULL,
`isActive` tinyint(1) NOT NULL DEFAULT '1',
`hammer` tinyint(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `url` (`url`),
KEY `group` (`link_group`),
FULLTEXT KEY `url_2` (`url`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 ROW_FORMAT=COMPRESSED AUTO_INCREMENT=1 ;
Just to be clear, the MySQL performance doesn't seem bad. SELECT * FROM links executes in 0.0005 seconds.
You might want to recreate that table. Your table definition might be causing the unpredictable behaviour.
Try using the TEXT data type for the url field. Also, using such a long string as the PRIMARY KEY is not a good idea. Use an id field for the primary key and then add a unique index on the url field (if so desired).
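A sketch of that suggested structure (untested; note that a TEXT column can only carry an index with a prefix length, and a 191-character prefix keeps the key under the 767-byte limit with utf8mb4 on older MySQL versions):
CREATE TABLE IF NOT EXISTS `links` (
  `id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
  `url` TEXT NOT NULL,
  `link_group` INT UNSIGNED NOT NULL,
  `isActive` TINYINT(1) NOT NULL DEFAULT '1',
  `hammer` TINYINT(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `url` (`url`(191)),
  KEY `group` (`link_group`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;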
I'm currently trying to create a logparser for Call of Duty 4. The parser itself is written in PHP and reads through every line of the logfile for a specific server, writing all the statistics to a database with mysqli. The databases are already in place and I'm fairly certain (with my limited experience) that they're well organized. However, I'm not sure in what way I should send the update/insert queries to the database, or rather, which way is optimal.
My databases are structured as follows
-- --------------------------------------------------------
--
-- Table structure for table `servers`
--
CREATE TABLE IF NOT EXISTS `servers` (
`server_id` tinyint(3) unsigned NOT NULL auto_increment,
`servernr` smallint(1) unsigned NOT NULL default '0',
`name` varchar(30) NOT NULL default '',
`gametype` varchar(8) NOT NULL default '',
PRIMARY KEY (`server_id`),
UNIQUE KEY (`servernr`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
-- --------------------------------------------------------
--
-- Table structure for table `players`
--
CREATE TABLE IF NOT EXISTS `players` (
`player_id` tinyint(3) unsigned NOT NULL auto_increment,
`guid` varchar(8) NOT NULL default '0',
`fixed_name` varchar(30) NOT NULL default '',
`hide` smallint(1) NOT NULL default '0',
PRIMARY KEY (`player_id`),
UNIQUE KEY (`guid`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
-- --------------------------------------------------------
--
-- Table structure for table `playerstats`
--
CREATE TABLE IF NOT EXISTS `playerstats` (
`pid` mediumint(9) unsigned NOT NULL auto_increment,
`guid` varchar(8) NOT NULL default '0',
`servernr` smallint(1) unsigned NOT NULL default '0',
`kills` mediumint(8) unsigned NOT NULL default '0',
`deaths` mediumint(8) unsigned NOT NULL default '0',
# And more stats...
PRIMARY KEY (`pid`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
In short, servers and players contain unique entities, and they are combined in playerstats (i.e. statistics for a player in a specific server). In addition to the stats, they are also given a player id (pid) for use in later databases. Similarly, the database contains the tables weapons (unique weapons) and weaponstats (statistics for a weapon in a server), attachments and attachstats, and maps and mapstats. Once I get all of this working, I would like to implement more relations between these stats (i.e. a player's stats for a specific weapon in a specific server, using pid and wid).
The PHP parser copies the log of each server (there are 6 at the moment) over HTTP and then reads through them every 5 minutes (I'm not too sure on that yet). One can assume that during this parsing, every table has to be queried (either with UPDATE or INSERT) at least once (and probably a lot more). Right now, I have a number of options on how to send the queries (that I know of):
1: Use regular queries, i.e.
$statdb = new mysqli($sqlserver, $user, $pw, $db);
foreach ($playerlist as $guid => $data) {
    // Double quotes so $guid, $servernr and $data[...] are interpolated
    $query = "INSERT INTO `playerstats`
              VALUES (NULL, '$guid', $servernr, $data[0], $data[1])";
    $statdb->query($query);
}
2: Use multi query
$statdb = new mysqli($sqlserver, $user, $pw, $db);
$totalquery = '';
foreach ($playerlist as $guid => $data) {
    $query = "INSERT INTO `playerstats`
              VALUES (NULL, '$guid', $servernr, $data[0], $data[1]);";
    $totalquery .= $query;
}
$statdb->multi_query($totalquery);
3: Use prepared statements; I haven't actually tried this yet. It seems like a good idea, but then I have to make a prepared statement for every table (I think). Will that even be possible, and if so, will it be efficient?
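For comparison, a prepared-statement version of the insert from option 1 could look something like this (an untested sketch, assuming the same columns and connection):
$statdb = new mysqli($sqlserver, $user, $pw, $db);
// Prepare once per table, then execute many times with new values
$stmt = $statdb->prepare("INSERT INTO `playerstats` VALUES (NULL, ?, ?, ?, ?)");
foreach ($playerlist as $guid => $data) {
    // s = string (guid), i = integer (servernr, kills, deaths)
    $stmt->bind_param('siii', $guid, $servernr, $data[0], $data[1]);
    $stmt->execute();
}
$stmt->close();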
4: As you might be able to see from the aforementioned code, I initially accumulate all the statistics for each player, weapon, map, etc. into an array. Once the parser has read through the entire file, it sends a query with those accumulated stats to the MySQL server. However, I have also seen (more often than not) in other logparsers that queries are sent whenever a new line of the logfile has been parsed, so something like:
UPDATE playerstats
SET kills = kills+1
WHERE guid = $guid
It doesn't seem very efficient to me, but then again I'm just starting out with both PHP and SQL, so what do I know :>
So, in short; what would be the most efficient way to query the database, considering that the logparser reads through every line one by one? Of course, any other advice or suggestion is always welcome.
5. Create a single multi-row insert query, using MySQL's support for queries like
INSERT INTO table (fields) VALUES (data), (data), ...
It seems the most efficient of them all, prepared statements included.
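A sketch of how that could be built in PHP, reusing the $playerlist loop from the question (escaping kept to a minimum for brevity):
$statdb = new mysqli($sqlserver, $user, $pw, $db);
$values = array();
foreach ($playerlist as $guid => $data) {
    $guid = $statdb->real_escape_string($guid);
    // One parenthesised group per row
    $values[] = "(NULL, '$guid', $servernr, {$data[0]}, {$data[1]})";
}
if ($values) {
    // Single statement inserting every accumulated row at once
    $statdb->query("INSERT INTO `playerstats` VALUES " . implode(', ', $values));
}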
The most efficient way to me would be to scan the server every so often, once every 5 minutes or so, and accumulate the stats into an array (e.g. in 5 minutes 38 people have been on the server, so you have an array of 38 IDs, each holding the accumulated stat changes that need to be written to the database). Run one query to check which of those users already have an entry in the stats table, and then 2 more queries: one to create the new users (a multi-row insert) and one to update the existing users (a single query with a CASE update, sketched below). That limits you to 3 queries every 5 minutes.
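For the last step, the CASE-based update could look roughly like this (a sketch with made-up guid values and accumulated deltas):
-- One statement applying the accumulated changes for several players at once
UPDATE playerstats
SET kills  = kills  + CASE guid WHEN 'abc123' THEN 5 WHEN 'def456' THEN 2 ELSE 0 END,
    deaths = deaths + CASE guid WHEN 'abc123' THEN 1 WHEN 'def456' THEN 4 ELSE 0 END
WHERE servernr = 1
  AND guid IN ('abc123', 'def456');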
I have a table whose structure is as follows:
CREATE TABLE `table_name` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`ttype` int(1) DEFAULT '19',
`title` mediumtext,
`tcode` char(2) DEFAULT NULL,
`tdate` int(11) DEFAULT NULL,
`visit` int(11) DEFAULT '0',
PRIMARY KEY (`id`),
KEY `tcode` (`tcode`),
KEY `ttype` (`ttype`),
KEY `tdate` (`tdate`)
) ENGINE=MyISAM;
I have two queries in x.php, like these:
SELECT * FROM table_name WHERE id='10' LIMIT 1
UPDATE table_name SET visit=visit+1 WHERE id='10' LIMIT 1
My first question is whether updating 'visit' in the table causes reindexing and decreases performance or not. Note that 'visit' is not a key.
A second method would be to create a new table containing 'visit', as follows:
CREATE TABLE `table_name2` (
'newid' int(10) unsigned NOT NULL ,
`visit` int(11) DEFAULT '0',
PRIMARY KEY (`newid`)
) ENGINE=MyISAM;
Selecting would then be done with:
SELECT w.*,q.visit FROM table_name w LEFT JOIN table_name2 q
ON (w.id=q.newid) WHERE w.id='10' LIMIT 1
UPDATE table_name2 SET visit=visit+1 WHERE newid='10' LIMIT 1
Is the second method preferable to the first? Which one would have better performance and be quicker?
Note: all SQL queries are run from PHP (the mysql_query command). Also, I need the first table's indexes for other queries on other pages.
I'd say your first method is the best, and simplest. Updating visit will be very fast and no updating of indexes needs to be performed.
I'd prefer the first, and have used that for similar things in the past with no problems. You can remove the LIMIT clause: since id is your primary key, you will never have more than one result, although the query optimizer probably does this for you anyway.
There was a question someone asked earlier to which I responded with a solution you may want to consider as well. When you use 'count' columns you lose the ability to mine the data later. With a transaction table, not only can you get 'views' counts, but you can also query for date ranges, etc. Sure, you will carry the weight of storing potentially hundreds of thousands of rows, but the table is narrow and the indexes are numeric.
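A sketch of what such a transaction table could look like (the table and column names here are made up for illustration):
-- One row per view; counts and date-range stats can be derived later
CREATE TABLE `visits` (
  `id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
  `item_id` INT UNSIGNED NOT NULL,
  `visited_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  KEY `item_id` (`item_id`)
) ENGINE=MyISAM;

-- Total views for one item
SELECT COUNT(*) FROM `visits` WHERE `item_id` = 10;

-- Views per day over the last week
SELECT DATE(`visited_at`) AS day, COUNT(*) AS views
FROM `visits`
WHERE `item_id` = 10
  AND `visited_at` >= NOW() - INTERVAL 7 DAY
GROUP BY DATE(`visited_at`);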
I cannot see a solution on the database side... Perhaps you can do it in PHP: if the user has a PHP session, you could, for example, only update the visitor count every 10th time, like:
<?php
session_start();
// Initialise the per-session counter on the first visit
if (!isset($_SESSION['count'])) {
    $_SESSION['count'] = 0;
}
$_SESSION['count'] += 1;
// Only hit the database once every 10 page views
if ($_SESSION['count'] >= 10) {
    do_the_function_that_updates_the_count_plus_10();
    $_SESSION['count'] = 0;
}
Of course you lose some counts this way, but perhaps that is not so important?
I want to do the following:
Select multiple rows on an INNER JOIN between two tables.
Using the primary keys of the returned rows, either:
Update those rows, or
Insert rows into a different table with the returned primary key as a foreign key.
In PHP, echo the results of step #1 out, ideally with results of #2 included (to be consumed by a client).
I've written the join, but not much else. I tried using a user-defined variable to store the primary keys from step #1 to use in step #2, but as I understand it user-defined variables are single-valued, and my SELECT can return multiple rows. Is there a way to do this in a single MySQL transaction? If not, is there a way to do this with some modicum of efficiency?
Update: Here are the schemas of the tables I'm concerned with (names changed, 'natch):
CREATE TABLE IF NOT EXISTS `widgets` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`author` varchar(75) COLLATE utf8_unicode_ci NOT NULL,
`text` varchar(500) COLLATE utf8_unicode_ci NOT NULL,
`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated` timestamp
NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
);
CREATE TABLE IF NOT EXISTS `downloads` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`widget_id` int(11) unsigned NOT NULL,
`lat` float NOT NULL,
`lon` float NOT NULL,
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
);
I'm currently doing a join to get all widgets paired with their downloads. Assuming $author and $batchSize are PHP variables:
SELECT w.id, w.author, w.text, w.created, d.lat, d.lon, d.date
FROM widgets AS w
INNER JOIN downloads AS d
ON w.id = d.widget_id
WHERE w.author NOT LIKE '$author'
ORDER BY w.updated ASC
LIMIT $batchSize;
Ideally my query would get a bunch of widgets, update their updated field OR insert a new download referencing that widget (I'd love to see answers for both approaches, I haven't decided on one yet), and then allow the joined widgets and downloads to be echoed. Bonus points if the newly inserted download or updated widgets are included in the echo.
Since you asked if you can do this in a single MySQL transaction, I'll mention cursors. Cursors allow you to do a SELECT, loop through each row, and do the insert or anything else you want, all within the database. So you could create a stored procedure that does all the logic behind the scenes and that you can call from PHP.
Based on your update, I wanted to mention that you can have the stored procedure return the new recordset or an id, anything you want. For more info on creating stored procedures that return a recordset with PHP you can check out this post: http://www.joeyrivera.com/2009/using-mysql-stored-procedure-inout-and-recordset-w-php/
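A rough sketch of such a procedure using the schemas above (the procedure name, the placeholder lat/lon values, and the hard-coded batch size are all made up for illustration):
DELIMITER //
CREATE PROCEDURE fetch_and_mark_widgets(IN p_author VARCHAR(75))
BEGIN
  DECLARE v_id INT UNSIGNED;
  DECLARE done INT DEFAULT 0;
  -- Cursor over the widgets to hand out (batch size hard-coded here,
  -- since older MySQL versions do not allow a variable LIMIT)
  DECLARE cur CURSOR FOR
    SELECT id FROM widgets
    WHERE author NOT LIKE p_author
    ORDER BY updated ASC
    LIMIT 100;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO v_id;
    IF done THEN
      LEAVE read_loop;
    END IF;
    -- Record a download referencing this widget (0,0 are placeholder coordinates)
    INSERT INTO downloads (widget_id, lat, lon) VALUES (v_id, 0, 0);
  END LOOP;
  CLOSE cur;

  -- Final SELECT: the recordset returned to PHP, including the new downloads
  SELECT w.id, w.author, w.text, w.created, d.lat, d.lon, d.date
  FROM widgets AS w
  INNER JOIN downloads AS d ON w.id = d.widget_id
  WHERE w.author NOT LIKE p_author
  ORDER BY w.updated ASC;
END //
DELIMITER ;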