I have a very simple SELECT statement, for instance:
SELECT id, some_more_fields_that_do_not_matter
FROM account
WHERE status = '0'
LIMIT 2
Keep in mind that the above returns the following IDs: 1,2
The next thing I do is loop over these rows in a foreach and update some records:
UPDATE account
SET connection = $_SESSION['account_id'],
status = '1'
WHERE id = $row_id
Now the rows in the table with IDs 1,2 have the status '1' (which I check to make sure the rows were updated correctly; if the update failed, I undo everything). As soon as everything is OK I have a counter from the first step, which is 2 in this case, so 2 rows should have been updated, and I verify this with a simple COUNT(*).
This information is also emailed, with for instance the following data (which means everything was updated correctly):
- Time of update: 2013-09-30 16:30:02
- Total rows to be updated (selected) = 2
- Total rows successfully updated after completing queries = 2
The following IDs should have been updated
(these were returned by the SELECT statement):
1,2
So far so good. Now comes the weird part.
The very next query, made by another user, will however sometimes return for instance the IDs 1,2. But that should be impossible, because those rows no longer have the status '0' and should never be returned by the SELECT statement. So what happens is the following:
I now receive an email with for instance:
- Time of update: 2013-09-30 16:30:39
- Total rows to be updated (selected) = 10
- Total rows successfully updated after completing queries = 8
The following IDs should have been updated
(these were returned by the SELECT statement):
1,2,3,4,5,6,7,8,9,10
Now it is really strange that 1 and 2 are selected for the update again. In most cases it works fine, but very rarely it returns IDs that have already been updated to status '1'.
Notice the time between these updates; it's not even the same time. I first thought these queries might be executed at exactly the same time (which should be impossible, right? Or is it possible?). Or could the query somehow have been cached, and should I change some settings in my mysql.conf file?
I have never had this problem before, and I have tried every way of updating, but it keeps happening. Would it perhaps be possible to combine these 2 queries into one big update query? All the data is the same and nothing strange is going on. I hope someone has a clue what could cause this problem and why it happens randomly (rarely).
EDIT:
I updated the script and added microtime() to check how long the SELECT, the UPDATE and the check SELECT together take.
First member (with ID 20468) makes a call at: 2013-10-01 08:30:10
2/2 rows have been updated correctly of the following 2 ID's:
33412,33395
Queries took together 0.878005027771 seconds
Second member (with ID 10123) makes a call at: 2013-10-01 08:30:14
20/22 rows have been updated correctly of the following 22 ID's:
33392,33412,33395,33396,41489,13011,12555,27971,22811 and some more but not important
Queries took together 3.3440849781036 seconds
Now you see that 33412 and 33395 are again returned by the SELECT.
Third member (with ID 20951) makes a call at: 2013-10-01 08:30:16
9/9 rows have been updated correctly of the following 9 ID's:
33392,33412,33395,33396,41489,13011,12555,27971,22811
Queries took together: nothing was returned, which concerns me a
little too.
Since we do not know how long the last queries took, we only know that the first and second calls should have worked correctly without problems, because there are 4 seconds between them and the execution time was 3.34 seconds. Besides that, the first one started at 2013-10-01 08:30:17, because the time logged for a call (when emailing it) is taken at the end of the script. The timing of the queries runs from the start of the first query until directly after the last query, and this is before I send the email (of course).
Could it be something in my my.cnf file that makes MySQL behave this strangely?
Also, I still don't understand why it didn't return any execution time for the last (third) call.
A solution would be to queue these actions by first saving them into a table and executing them one at a time via a cron job. But that's not really what I want: it should be instant when a member makes the call. Thanks for the help so far.
Anyway, here is my my.cnf in case someone has suggestions (the server has 16GB of RAM installed):
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
max_connections = 20
query_cache_type = 1
query_cache_limit = 1M
query_cache_size = 4M
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M
innodb_buffer_pool_size = 333M
join_buffer_size = 128K
tmp_table_size = 16M
max_heap_table_size = 16M
table_cache = 200
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
#no-auto-rehash # faster start of mysql but no tab completion
[isamchk]
key_buffer = 16M
!includedir /etc/mysql/conf.d/
EDIT 2:
$recycle_available = $this->Account->Membership->query("
SELECT Account.id,
(SELECT COUNT(last_clicks.id) FROM last_clicks WHERE last_clicks.account = Account.id AND last_clicks.roulette = '0' AND last_clicks.date BETWEEN '".$seven_days_back."' AND '".$tomorrow."') AS total,
(SELECT COUNT(last_clicks.id)/7 FROM last_clicks WHERE last_clicks.account = Account.id AND last_clicks.roulette = '0' AND last_clicks.date BETWEEN '".$seven_days_back."' AND '".$tomorrow."') AS avg
FROM membership AS Membership
INNER JOIN account AS Account ON Account.id = Membership.account
WHERE Account.membership = '0' AND Account.referrer = '0' AND Membership.membership = '1'
HAVING avg > 0.9
ORDER BY total DESC");
foreach($referrals as $key => $value)
{
$this->Account->query("UPDATE account SET referrer='".$account_id."', since='".$since_date."', expires='".$value['Account']['expires']."', marker='0', kind='1', auction_amount='".$value['Account']['auction_amount']."' WHERE id='".$recycle_available[$key]['Account']['id']."'");
$new_referral_id[] = $recycle_available[$key]['Account']['id'];
$counter++;
}
$total_updated = $this->Account->find('count',array('conditions'=>array('Account.id'=>$new_referral_id, 'Account.referrer'=>$account_id, 'Account.kind'=>1)));
You indicate in the comments you are using transactions. However, I can't see any $dataSource->begin(); nor $dataSource->commit(); in the PHP snippet you posted. Therefore, you must be doing $dataSource->begin(); prior to the snippet and $dataSource->commit(); or $dataSource->rollback(); after the snippet.
The problem is that you're updating and then trying to select prior to committing. No implicit commit is created, so you don't see updated data: http://dev.mysql.com/doc/refman/5.7/en/implicit-commit.html
It is hard to tell the reason for this strange behaviour without getting my hands on the DB. But a much better way to do what you are doing would be to do it all in one query:
UPDATE account
SET connection = $_SESSION['account_id'],
status = '1'
WHERE status = '0'
Most likely this will solve the problem you are facing.
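If you still need the list of affected IDs afterwards, another option is a locking read. This is only a sketch, assuming InnoDB and that the statements run inside one transaction; the literal values stand in for the session variables from the question:

```sql
START TRANSACTION;

-- Lock the candidate rows. A concurrent transaction running the same
-- statement blocks here until we COMMIT, so it can never claim the
-- same ids while they still look like status '0'.
SELECT id FROM account WHERE status = '0' LIMIT 2 FOR UPDATE;

-- Update only the ids returned above (filled in by the application code).
UPDATE account SET connection = 123, status = '1' WHERE id IN (1, 2);

COMMIT;
```

Without FOR UPDATE (or the single combined UPDATE above), two overlapping transactions can both read the same status '0' rows before either one commits, which matches the symptom described in the question.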
I suggest using this syntax:
$query="UPDATE account SET connection = {$_SESSION['account_id']}, status = 1 WHERE id=$row_id;";
PHP requires braces ({}) when you interpolate an array element inside a double-quoted string, and the quotes around an integer value are unnecessary.
Related
I have a PHP script that launches an SQL query, which takes a few seconds and returns about 15K+ results, after performing a complex calculation based on data found in a DB table on my server, named table.
After getting the results, it is supposed to INSERT (using REPLACE) these records into another DB table on my server.
The script used to work flawlessly when table was a simple core table. I later modified the SQL query to use data from a VIEW, call it view_table, which is based on table but has an extra column that is calculated on the fly.
This made the script start crashing my whole SQL server every once in a while, throwing this error:
PHP Warning: mysqli::query(): MySQL server has gone away in
/home/user/script.php on line 109
Below is line 109:
function getRecordsFromDB(){
logMemoryUtilizationDetails();
file_put_contents(LOG_DIRECTORY.'service.log', date("d/m:H:i").':'."getRecordsFromDB::Entry".PHP_EOL, FILE_APPEND);
global $sourceConn;
$selectQuery = file_get_contents(__DIR__.'/my-query.sql');
$items = array();
$result = $sourceConn->query($selectQuery); // LINE 109
if ($result){
if ($result->num_rows > 0) {
while($row = $result->fetch_assoc()) {
$item = new Item();
$item->id = $row['id'];
$item->itemId = $row['itemid'];
I added logging to see how much memory the script uses on start and exit; when it succeeds, it uses only about 37MB of RAM at its peak.
My server has 6GB of RAM and 4 cores.
There is no other script running on my server that causes SQL server crashes like this, so I'm sure this script is causing the crash.
Here is the MY.CNF of my server:
[mysqld]
# disable mysql strict mode
sql_mode=""
log-error=/var/lib/mysql/host.myhost.com.err
performance_schema=1
query_cache_type=0
query_cache_size=0
query_cache_limit=0
key_buffer_size=16M
max_connections=200
max_tmp_tables=1
table_open_cache=2000
local-infile=0
thread_cache_size=4
innodb_file_per_table=1
default-storage-engine=MyISAM
innodb_use_native_aio=0
max_allowed_packet=1024M
innodb_buffer_pool_size=800M
open_files_limit=10000
#wait_timeout=500
tmp_table_size=256M
max_heap_table_size=256M
innodb_buffer_pool_instances=1
#general_log=on
#general_log_file = /var/lib/mysql/all-queries.log
slow-query-log=0
slow-query-log-file=/var/lib/mysql/slow_queries.log
long_query_time=1.0
#log_queries_not_using_indexes=1
And this is from my PHP.INI (for PHP 7.2 which I'm using):
max_execution_time = 240
max_input_time = 60
max_input_vars = 1000
memory_limit = 512M
[MySQLi]
mysqli.max_persistent = -1
;mysqli.allow_local_infile = On
mysqli.allow_persistent = On
mysqli.max_links = -1
mysqli.cache_size = 2000
mysqli.default_port = 3306
mysqli.default_socket =
mysqli.default_host =
mysqli.default_user =
mysqli.default_pw =
mysqli.reconnect = Off
I don't see any mysql.connect_timeout setting in those files.
I have many other scripts and they all work fine, so I wouldn't want to change something globally as I'm afraid it can cause other issues on my server.
Looks like a timed-out or failed query. Please paste the SQL query you are using. You can also try to see for yourself where the query might cause this by pasting the query into your MySQL IDE (Navicat is my favorite) and prepending it with 'explain extended' (no quotes). So your query would look like 'explain extended select ...(all 300 lines)'.
Look for key lengths higher than 4, missing primary keys, and really high numbers of rows examined, for starters.
Also, instead of a view you may want to consider creating a stored procedure in which you select everything into a temporary table and then do the on-the-fly calculation in the next query. Of course, you need to configure my.cnf to allow temporary tables so the table is destroyed once the session is complete. Also, if you have any replication or a cluster of servers, make sure to stop the binlog before creating the temporary table and start it again once your queries are completed and your session is about to close.
If you like, please paste your my.cnf (MySQL config file) so we can make sure your config is optimal for the large query.
Also, for troubleshooting purposes, you may want to temporarily increase the max execution time in php.ini.
I'm having problems with timeouts using the DataStax PHP driver for Cassandra.
Whenever I execute a certain command, it always throws this exception after 10s:
PHP Fatal error: Uncaught exception 'Cassandra\Exception\TimeoutException' with message 'Request timed out'
My php code is like this:
$cluster = Cassandra::cluster()->build();
$session = $cluster->connect("my_base");
$statement = new Cassandra\SimpleStatement("SELECT COUNT(*) as c FROM my_table WHERE my_colunm = 1 AND my_colunm2 >= '2015-01-01' ALLOW FILTERING");
$result = $session->execute($statement);
$row = $result->first();
My settings in cassandra.yaml are:
# How long the coordinator should wait for read operations to complete
read_request_timeout_in_ms: 500000
# How long the coordinator should wait for seq or index scans to complete
range_request_timeout_in_ms: 1000000
# How long the coordinator should wait for writes to complete
write_request_timeout_in_ms: 2000
# How long the coordinator should wait for counter writes to complete
counter_write_request_timeout_in_ms: 50000
# How long a coordinator should continue to retry a CAS operation
# that contends with other proposals for the same row
cas_contention_timeout_in_ms: 50000
# How long the coordinator should wait for truncates to complete
# (This can be much longer, because unless auto_snapshot is disabled
# we need to flush first so we can snapshot before removing the data.)
truncate_request_timeout_in_ms: 60000
# The default timeout for other, miscellaneous operations
request_timeout_in_ms: 1000000
I've already tried this:
$result = $session->execute($statement,new Cassandra\ExecutionOptions([
'timeout' => 120
])
);
and this:
$cluster = Cassandra::cluster()->withDefaultTimeout(120)->build();
and this:
set_time_limit(0)
And it always throws the TimeoutException after 10s.
I'm using Cassandra 3.6.
Any ideas?
Using withConnectTimeout (instead of, or together with, withDefaultTimeout) might help avoid a TimeoutException (it did in my case):
$cluster = Cassandra::cluster()->withConnectTimeout(60)->build();
However, if you need such a long timeout, then there is probably an underlying problem that will need solving eventually.
You are doing two things wrong.
ALLOW FILTERING: Be careful. Executing this query with ALLOW FILTERING is probably not a good idea, as it can use a lot of your computing resources. Don't use ALLOW FILTERING in production. Read the DataStax doc about using ALLOW FILTERING:
https://docs.datastax.com/en/cql/3.3/cql/cql_reference/select_r.html?hl=allow,filter
count(*): It is also a terrible idea to use count(*). count(*) actually pages through all the data, so a SELECT count(*) FROM userdetails without a limit would be expected to time out with that many rows. Some details here: http://planetcassandra.org/blog/counting-key-in-cassandra/
How to fix it?
Instead of using ALLOW FILTERING, create an index table on your clustering column if you need to query without the partition key.
Instead of using count(*), create a counter table.
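A minimal sketch of such a counter table (the table and column names here are made up for illustration; the real schema depends on what you are counting):

```sql
-- One counter row per partition whose rows we want to count.
CREATE TABLE my_table_counts (
    my_colunm int PRIMARY KEY,
    row_count counter
);

-- Increment the counter whenever a row is written to my_table,
-- instead of counting rows after the fact.
UPDATE my_table_counts SET row_count = row_count + 1 WHERE my_colunm = 1;
```

Reading the count then becomes a single-row lookup instead of paging through all the data.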
I was doing bulk inserts into the RT (real-time) index using PHP, with AUTOCOMMIT disabled, e.g.:
// sphinx connection
$sphinxql = mysqli_connect($sphinxql_host.':'.$sphinxql_port,'','');
//do some other time consuming work
//sphinx start transaction
mysqli_begin_transaction($sphinxql);
//do 50k updates or inserts
// Commit transaction
mysqli_commit($sphinxql);
I kept the script running overnight, and in the morning I saw:
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate
212334 bytes) in
So when I checked the nohup.out file closely, I noticed these lines:
PHP Warning: mysqli_query(): MySQL server has gone away in /home/script.php on line 502
Warning: mysqli_query(): MySQL server has gone away in /home/script.php on line 502
Memory usage before these lines was normal, but after these lines it started to increase until it hit the PHP mem_limit, gave the PHP fatal error and died.
In script.php, line 502 is:
mysqli_query($sphinxql,$update_query_sphinx);
So my guess is that the Sphinx server closed the connection (or died) after a few hours or minutes of inactivity.
I have tried setting this in sphinx.conf:
client_timeout = 3600
and restarted searchd with:
systemctl restart searchd
but I am still facing the same issue.
So how can I keep the Sphinx server from dying on me when there is no activity for a longer time?
More info added:
I am getting data from MySQL in 50k chunks at a time and looping over each row to update it in the Sphinx RT index, like this:
//6mil rows update in mysql, so it takes around 18-20 minutes to complete this then comes this following part.
$subset_count = 50000 ;
$total_count_query = "SELECT COUNT(*) as total_count FROM content WHERE enabled = '1'" ;
$total_count = mysqli_query ($conn,$total_count_query);
$total_count = mysqli_fetch_assoc($total_count);
$total_count = $total_count['total_count'];
$current_count = 0;
while ($current_count <= $total_count){
$get_mysql_data_query = "SELECT record_num, views , comments, votes FROM content WHERE enabled = 1 ORDER BY record_num ASC LIMIT $current_count , $subset_count ";
//sphinx start transaction
mysqli_begin_transaction($sphinxql);
if ($result = mysqli_query($conn, $get_mysql_data_query)) {
/* fetch associative array */
while ($row = mysqli_fetch_assoc($result)) {
//sphinx escape whole array
$escaped_sphinx = mysqli_real_escape_array($sphinxql,$row);
//update data in sphinx index
$update_query_sphinx = "UPDATE $sphinx_index
SET
views = ".$escaped_sphinx['views']." ,
comments = ".$escaped_sphinx['comments']." ,
votes = ".$escaped_sphinx['votes']."
WHERE
id = ".$escaped_sphinx['record_num']." ";
mysqli_query ($sphinxql,$update_query_sphinx);
}
/* free result set */
mysqli_free_result($result);
}
// Commit transaction
mysqli_commit($sphinxql);
$current_count = $current_count + $subset_count ;
}
So there are a couple of issues here, both related to running big processes.
MySQL server has gone away: this usually means that MySQL has timed out, but it could also mean that the MySQL process crashed from running out of memory. In short, it means that MySQL has stopped responding and didn't tell the client why (i.e. no direct query error). Seeing as you said you're running 50k updates in a single transaction, it's likely that MySQL just ran out of memory.
Allowed memory size of 134217728 bytes exhausted: this means that PHP ran out of memory. It also lends credence to the idea that MySQL ran out of memory.
So what to do about this?
The initial stop-gap solution is to increase the memory limits for PHP and MySQL. That doesn't really solve the root cause, and depending on the amount of control (and knowledge) you have of your deployment stack, it may not be possible.
As a few people mentioned, batching the process may help. It's hard to say the best way to do this without knowing the actual problem you're trying to solve. If you can process, say, 10000 or 20000 records instead of 50000 in a batch, that may solve your problems. If that would take too long in a single process, you could also look into using a message queue (RabbitMQ is a good one that I've used on a number of projects), so that you can run multiple processes at the same time, each handling smaller batches.
If you're doing something that requires knowledge of all 6 million+ records to perform the calculation, you could potentially split the process into a number of smaller steps, cache the work done "to date" (as such), and then pick up the next step in the next process. How to do this cleanly is difficult (again, something like RabbitMQ could simplify that by firing an event when each process finishes, so that the next one can start).
So, in short, your two best options are:
Throw more resources/memory at the problem everywhere you can.
Break the problem down into smaller, self-contained chunks.
You need to reconnect or restart the DB session just before mysqli_begin_transaction($sphinxql), something like this:
<?php
//reconnect to spinx if it is disconnected due to timeout or whatever , or force reconnect
function sphinxReconnect($force = false) {
global $sphinxql_host;
global $sphinxql_port;
global $sphinxql;
if($force){
mysqli_close($sphinxql);
$sphinxql = @mysqli_connect($sphinxql_host.':'.$sphinxql_port,'','') or die('ERROR');
}else{
if(!mysqli_ping($sphinxql)){
mysqli_close($sphinxql);
$sphinxql = @mysqli_connect($sphinxql_host.':'.$sphinxql_port,'','') or die('ERROR');
}
}
}
//10mil+ rows update in mysql, so it takes around 18-20 minutes to complete this then comes this following part.
//reconnect to sphinx
sphinxReconnect(true);
//sphinx start transaction
mysqli_begin_transaction($sphinxql);
//do your otherstuff
// Commit transaction
mysqli_commit($sphinxql);
I have a weird problem.
I'm running a query:
SELECT IMIE, NAZWISKO, PESEL2, ADD_DATE, CONVERT(varchar, ADD_DATE, 121) AS XDATA, ID_ZLECENIA_XXX, * FROM XXX_KONWERSJE_HISTORIA AS EKH1
INNER JOIN XXX_DANE_PACJENTA EDP1 ON EKH1.ID_ZLECENIA_XXX=EDP1.ORDER_ID_XXX
WHERE EKH1.ID_KONWERSJE = (
SELECT MIN(ID_KONWERSJE)
FROM XXX_KONWERSJE_HISTORIA AS EKH2
WHERE EKH1.ID_ZLECENIA_XXX = EKH2.ID_ZLECENIA_XXX
)
AND EDP1.RECNO = (
SELECT MAX(RECNO)
FROM XXX_DANE_PACJENTA EDP2
WHERE EDP2.ORDER_ID_XXX = EDP1.ORDER_ID_XXX
)
AND EKH1.ID_ZLECENIA_XXX LIKE '%140000393%'
AND ADD_DATE>'20140419' AND ADD_DATE<='20140621 23:59:59.999'
ORDER BY EKH1.ID_KONWERSJE, EKH1.ID_ZLECENIA_XXX DESC
And the query works OK if I use a date range of around 2 months (63 days; it gives me 1015 results). If I extend the date range, the query simply fails ("Query failed blabla").
This happens under windows 64 bit php (apache, Xamp).
When I run this query directly from MS SQL Server Management Studio, everything works fine, no matter what date range I choose.
What is going on? Is there a limit of some kind under Apache/PHP? (There is no information like "query time exceeded", only "query failed".)
And the query works ok if I use a date limit around 2 months (63 days - it gives me 1015 results). If I extend the date limit query simply fails (Query failed blabla). ...
What is going on? Is there a limit of some kind under apache/php? (There is no information like "query time excessed", only "query failed")
This could happen because the selectivity of ADD_DATE>'20140419' AND ADD_DATE<='20140621 23:59:59.999' is medium/low (there are [too] many rows that satisfy this predicate) and SQL Server has to scan (yes, scan) XXX_KONWERSJE_HISTORIA too many times to check the following predicate:
WHERE EKH1.ID_KONWERSJE = (
SELECT ...
FROM XXX_KONWERSJE_HISTORIA AS EKH2
WHERE EKH1.ID_ZLECENIA_XXX = EKH2.ID_ZLECENIA_XXX
)
How many times does SQL Server have to scan the XXX_KONWERSJE_HISTORIA table to verify this predicate? You can look at the properties of the Table Scan [XXX_KONWERSJE_HISTORIA] data access operator: 3917 times.
What can you do for a start? You should create the missing index (see the warning in green above the execution plan):
USE [OptimedMain]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[XXX_KONWERSJE_HISTORIA] ([ID_ZLECENIA_XXX])
INCLUDE ([ID_KONWERSJE])
GO
When I run this query directly from MS SQL Server Management Studio everything works fine, no matter what date limit I choose.
SQL Server Management Studio has execution timeout set to 0 by default (no execution timeout).
Note: if this index solves the problem, then you should also try (1) creating an index on ADD_DATE with all required columns (CREATE INDEX ... INCLUDE (...)) and (2) creating unique clustered indexes on these tables.
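A sketch of such an index on ADD_DATE (the index name and the INCLUDE list below are assumptions; the INCLUDE list should contain whichever columns the query actually reads):

```sql
-- Covering index for the date-range filter; column list is illustrative.
CREATE NONCLUSTERED INDEX IX_XXX_KONWERSJE_HISTORIA_ADD_DATE
ON dbo.XXX_KONWERSJE_HISTORIA (ADD_DATE)
INCLUDE (ID_KONWERSJE, ID_ZLECENIA_XXX);
```

With ADD_DATE as the leading key, the range predicate can seek instead of scanning the whole table for every outer row.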
Try setting these PHP configuration options in your PHP script via ini_set:
ini_set('memory_limit', '512M');
ini_set('mssql.timeout', 60 * 20);
I'm not sure it will help you out.
I'm counting the right-answers field of a table and saving that calculated value in another table. For this I'm using two queries: the first is the count query, and I retrieve its value using loadResult(). After that I update another table with this value and the date/time. The problem is that in some cases the calculated value is not saved, only the date/time.
The queries look something like this:
$sql = 'SELECT count(answer)
FROM #_questionsTable
WHERE
answer = 1
AND
testId = '.$examId;
$db->setQuery($sql);
$rightAnsCount = $db->loadResult();
$sql = 'UPDATE #__testsTable
SET finish = "'.date('Y-m-d H:i:s').'", rightAns='.$rightAnsCount.'
WHERE testId = '.$examId;
$db->setQuery($sql);
$db->Query();
answer = 1 means that the question was answered correctly.
I think that when the 2nd query is executed the first one has not finished yet, but everything I read says the 2nd query waits for the 1st to finish before running, and I don't know how else to make the 2nd query wait for the 1st one to end.
Any help will be appreciated. Thanks!
A PHP MySQL query is synchronous, i.e. it completes before returning; Joomla!'s database class doesn't implement any sort of asynchronous or callback functionality.
While you are missing a ';', that wouldn't account for it working some of the time.
How is the rightAns column defined? E.g. what happens when your $rightAnsCount is null or 0?
Turn on Joomla!'s debug mode and check the SQL that's generated in the profile section; it looks something like this:
eg.
Profile Information
Application afterLoad: 0.002 seconds, 1.20 MB
Application afterInitialise: 0.078 seconds, 6.59 MB
Application afterRoute: 0.079 seconds, 6.70 MB
Application afterDispatch: 0.213 seconds, 7.87 MB
Application afterRender: 0.220 seconds, 8.07 MB
Memory Usage
8511696
8 queries logged.
SELECT *
FROM jos_session
WHERE session_id = '5cs53hoh2hqi9ccq69brditmm7'
DELETE
FROM jos_session
WHERE ( TIME < '1332089642' )
etc...
You may need to add a semicolon to the end of your SQL queries:
...testId = '.$examId.';';
Ah, something cppl mentioned is the key, I think: you may need to account for a null value from your first query.
Changing this line:
$rightAnsCount = $db->loadResult();
To this might make the difference:
$rightAnsCount = (int) $db->loadResult();
Casting to int turns a null result into 0 (and avoids calling loadResult() twice, which would run the query again).
I am pretty sure you can do this in one query instead:
$sql = 'UPDATE #__testsTable
SET finish = NOW()
, rightAns = (
SELECT count(answer)
FROM #_questionsTable
WHERE
answer = 1
AND
testId = '.$examId.'
)
WHERE testId = '.$examId;
$db->setQuery($sql);
$db->Query();
You can also update the values in all rows of the table this way by slightly modifying the query, so you can do all rows in one go. Let me know if this is what you are trying to achieve and I will rewrite the example.
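For reference, a sketch of that all-rows variant (assuming the standard Joomla #__ table prefix and that each row in the tests table is matched to its questions by testId):

```sql
-- Recompute rightAns for every test in one statement,
-- using a correlated subquery per row.
UPDATE #__testsTable AS t
SET t.finish = NOW(),
    t.rightAns = (
        SELECT COUNT(q.answer)
        FROM #__questionsTable AS q
        WHERE q.answer = 1
          AND q.testId = t.testId
    );
```

Dropping the outer WHERE clause is what makes it cover all rows; the subquery still counts only each row's own answers.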