Maximum time execution CodeIgniter 3 issue - php

From what I've read, the only suggested solution to the "maximum execution time exceeded" issue in CodeIgniter 3 is to raise the execution time limit, for example from 30 to 300 seconds.
I'm using CodeIgniter on a news website. The news section page loads only the 20 latest news items, which shouldn't be enough to push the server past its execution time limit. (Note that the news table has more than 1,400 rows and the seen table has more than 150,000 log rows.)
It makes no sense for a user to wait more than 50 seconds for a response before the page loads.
Is there any practical way to load the page as fast as possible without hitting the "maximum execution time" error?
My code in the model:
public function get_section_news($id_section = 0, $length = 0, $id_sub_section = 0, $id_news_lessthan = 0) {
    $arr = array();

    if (intval($id_section) > 0 and intval($length) > 0) {

        $where = array();
        $where['sections.activity'] = 1;
        $where['news.deleted'] = 0;
        $where['news.id_section'] = $id_section;

        $query = $this->db;
        $query
            ->from("news")
            ->join("sections", "news.id_section = sections.id_section", "inner")
            ->order_by("news.id_news", "desc")
            ->limit($length);

        if (intval($id_sub_section) > 0) {
            $where['news.id_section_sub'] = $id_sub_section;
        }
        if ($id_news_lessthan > 0) {
            $where['news.id_news <'] = $id_news_lessthan;
        }

        $get = $query->where($where)->get();
        $num = $get->num_rows();
        if ($num > 0) {
            foreach ($get->result() as $key => $value) {
                $arr['row'][] = $value;
            }
        }
        $arr['is_there_more'] = ($length > $num and $num > 0);
    }
    return $arr;
}

This usually has nothing to do with the framework. Run the following command on your MySQL client and check whether there are any sleeping queries on your database.
SHOW FULL PROCESSLIST
Most likely you have sleeping queries, since you are not freeing the result set with
$get->free_result();
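For example, in get_section_news() above the result can be released as soon as the rows have been copied into $arr (a minimal sketch of where the call would go):
$get = $query->where($where)->get();
$num = $get->num_rows();
if ($num > 0) {
    foreach ($get->result() as $key => $value) {
        $arr['row'][] = $value;
    }
}
$get->free_result(); // release the result set so the connection is not left with pending results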
Another possible problem is slow queries. For that I recommend the following:
1) Make sure you are using the same storage engine on all tables. I recommend InnoDB, because some engines lock the whole table during a transaction, which is undesirable. You should have noticed this already when you ran SHOW FULL PROCESSLIST.
2) Run your queries on a MySQL client and observe how long they take to execute. If they take too long, it may be the result of unindexed tables. You can EXPLAIN your query to identify unindexed tables. You may follow these 1,2,3 tutorials on indexing your tables, or you can do it easily with tools like Navicat.
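As a rough illustration of point 2, the columns used in the model's WHERE, JOIN and ORDER BY clauses are the natural index candidates. The statements below are only a sketch (the index names are made up, and you should let EXPLAIN confirm which indexes the query actually needs):
// one-off statements, e.g. run from a migration
$this->db->query('CREATE INDEX idx_news_section ON news (id_section, deleted, id_news)');
$this->db->query('CREATE INDEX idx_news_sub_section ON news (id_section_sub)');
$this->db->query('CREATE INDEX idx_sections_activity ON sections (id_section, activity)');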

Related

MySql 8.0 Read Speed

I'm in need of some expertise here. I have a massive SQL database that I use in conjunction with a mobile app I'm developing, and I'm seeing very long times to fetch a result, at times upwards of 20 to 25 seconds. I've already managed to bring it down to where it is now from about 40 seconds per result. I'm hoping someone may have some insight on how I can speed up the queries and return a result in less than 20 seconds.
The main table has 4 columns plus 1 for the "id" column, and the database contains 15,254,543 rows of data. It is currently set up as InnoDB, with 4 indexes, and is about 1.3 GB in size.
My server is a GoDaddy VPS with 1 CPU and 4 GB of RAM. It is dedicated, so I do not share resources with anyone else; its only purpose besides a very basic website is the SQL database.
Just to note, the record count is not going to get any larger; I really just need to figure out a better way to return a result in under 20 seconds.
In more detail, the Android app connects to my website via a PHP script to run the query and return the results. I suspect there may be a better way to go about this, and that this may be where the pitfall is. Interestingly, when I search in phpMyAdmin I get a result back in under 3 seconds, which also points to the issue being in my PHP script. Here is the PHP script I wrote to do the work.
<?php
require "conn.php";

$FSC = $_POST["FSC"];
$PART_NUMBER = $_POST["NIIN"];

// Look up the part number (note: interpolating POST data directly is open to SQL injection)
$mysql_qry_1 = "select * from MyTable where PART_NUMBER like '$PART_NUMBER';";
$result_1 = mysqli_query($conn, $mysql_qry_1);
if (mysqli_num_rows($result_1) > 0) {
    $row = mysqli_fetch_assoc($result_1);
    $PART_NUMBER = $row["PART_NUMBER"];
    $FSC = $row["FSC"];
    $NIIN = $row["NIIN"];
    $ITEM_NAME = $row["ITEM_NAME"];
    echo $ITEM_NAME, ",>" . $PART_NUMBER, ",>" . $FSC, "" . $NIIN;

    // usage stats
    $sql = "INSERT INTO USAGE_STATS (ITEM_NAME, FSC, NIIN, PART_NUMBER)
            VALUES ('$ITEM_NAME', '$FSC', '$NIIN', '$PART_NUMBER')";
    if ($conn->query($sql) === TRUE) {
        $row = mysqli_insert_id($conn);
    } else {
        // do nothing
    }
} else {
    echo "NO RESULT CHECK TO ENSURE CORRECT PART NUMBER WAS ENTERED ,> | ,>0000000000000";
}

// Look up additional part numbers for the same FSC/NIIN
$mysql_qry_2 = "select * from MYTAB where FSC like '$FSC' and NIIN like '$NIIN';";
$result_2 = mysqli_query($conn, $mysql_qry_2);
if (mysqli_num_rows($result_2) > 0) {
    $row = mysqli_fetch_assoc($result_2);
    $AD_PART_NUMBER = $row["PART_NUMBER"];
    if (mysqli_num_rows($result_2) > 1) {
        echo ",>";
        while ($row = mysqli_fetch_assoc($result_2)) {
            $AD_PART_NUMBER = $row["PART_NUMBER"];
            echo $AD_PART_NUMBER, ", ";
        }
    } else {
        echo ",> | NO ADDITIONAL INFO FOUND | ";
    }
} else {
    echo ",> | NO ADDITIONAL INFO FOUND | ";
}
mysqli_close($conn); // was mysqli_close($con) -- undefined variable
?>
So my question is: how can I improve the read speed with the resources I have, or is there an issue with my current PHP script that is causing the bottleneck?
Instead of using LIKE, you will get much faster reads by matching exactly against a specific indexed column.
SELECT * FROM table_name FORCE INDEX (index_list) WHERE condition;
The other thing that speeds MySQL up greatly is an SSD drive on the VPS. An SSD greatly decreases the time it takes to scan a database that large.
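As a rough sketch of what the first lookup could look like with an exact match on an indexed column and a prepared statement (table and column names are taken from the question; this assumes PART_NUMBER is one of the four indexed columns):
$stmt = $conn->prepare("SELECT ITEM_NAME, FSC, NIIN, PART_NUMBER FROM MyTable WHERE PART_NUMBER = ?");
$stmt->bind_param("s", $PART_NUMBER); // exact match can use the index; binding also avoids SQL injection
$stmt->execute();
$result_1 = $stmt->get_result();      // get_result() requires the mysqlnd driver
if ($result_1->num_rows > 0) {
    $row = $result_1->fetch_assoc();
    // ... same output and usage-stats insert as before
}
$stmt->close();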

How to manage PHP memory?

I wrote a one-off script that I use to parse PDFs saved in the database. It was working okay until I ran out of memory after parsing 2,700+ documents.
The basic flow of the script is as follows:
Get a list of all the document IDs to be parsed and save it as an array in the session (~155k documents).
Display a page that has a button to start parsing
Make an AJAX request when that button is clicked that would parse the first 50 documents in the session array
$files = $_SESSION['files'];
$ids = array();

$slice = array_slice($files, 0, 50);
$files = array_slice($files, 50, null); // remove the 50 we are parsing on this request

if (session_status() == PHP_SESSION_NONE) {
    session_start();
}
$_SESSION['files'] = $files;
session_write_close();

for ($i = 0; $i < count($slice); $i++) {
    $ids[] = ":id_{$i}";
}
$ids = implode(", ", $ids);

$sql = "SELECT d.id, d.filename, d.doc_content
        FROM proj_docs d
        WHERE d.id IN ({$ids})";
$stmt = oci_parse($objConn, $sql);
for ($i = 0; $i < count($slice); $i++) {
    oci_bind_by_name($stmt, ":id_{$i}", $slice[$i]);
}
oci_execute($stmt, OCI_DEFAULT);
$cnt = oci_fetch_all($stmt, $data);
oci_free_statement($stmt);

# Do the parsing..
# Output a table row..
The response to the AJAX request includes a status indicating whether the script has finished parsing all ~155k documents; if it is not done, another AJAX request is made to parse the next 50. There is a 5-second delay between requests.
Questions
Why am I running out of memory now? I expected peak memory usage to occur in step 1, when I fetch the list of all document IDs (it holds every possible document), not a few minutes later when the session array holds 2,700 fewer elements.
I saw a few questions similar to my problem. Some suggested setting the memory limit to unlimited, which I don't want to do at all. Others suggested setting my variables to null when appropriate, and I did that, but I still ran out of memory after parsing ~2,700 documents. So what other approaches should I try?
# Freeing some memory space
$batch_size = null;
$with_xfa = null;
$non_xfa = null;
$total = null;
$files = null;
$ids = null;
$slice = null;
$sql = null;
$stmt = null;
$objConn = null;
$i = null;
$data = null;
$cnt = null;
$display_class = null;
$display = null;
$even = null;
$tr_class = null;
So I'm not really sure why, but reducing the number of documents parsed per batch from 50 down to 10 seems to fix the issue. I've gone past 5,000 documents now and the script is still running. My only guess is that when I was parsing 50 documents at a time I must have hit batches of large files that used up all of the allotted memory.
Update #1
I got another error about memory running out at 8,500+ documents. I've reduced the batches further down to 5 documents each and will see tomorrow if it goes all the way to parsing everything. If that fails, I'll just increase the memory allocated temporarily.
Update #2
So it turns out the only reason I was running out of memory is that we apparently have several PDF files over 300 MB uploaded to the database. I increased the memory allotted to PHP to 512 MB, and that allowed me to finish parsing everything.
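A per-batch log like the sketch below (not part of the original script; the array key assumes the doc_content column selected in the query above) would have surfaced those oversized documents early, without raising the memory limit blindly:
// right after oci_fetch_all($stmt, $data) in the batch above
$largest = 0;
foreach ($data['DOC_CONTENT'] as $blob) { // oci_fetch_all() groups results by column by default
    $largest = max($largest, strlen($blob));
}
error_log(sprintf(
    "batch done: peak memory %.1f MB, largest document %.1f MB",
    memory_get_peak_usage(true) / 1048576,
    $largest / 1048576
));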

Retrieve all rows from table in doctrine

I have a table with 100,000+ rows, and I want to select all of them in Doctrine and perform some action on each row. In Symfony2 with Doctrine I try it with this query:
$query = $this->getDefaultEntityManager()
    ->getRepository('AppBundle:Contractor')
    ->createQueryBuilder('c')
    ->getQuery()->iterate();

foreach ($query as $contractor) {
    // doing something
}
but then I get a memory leak, because I think it loads all the data into memory.
I have more experience with ADOdb; in that library, when I do this:
$result = $ADOdbObject->Execute('SELECT * FROM contractors');
while ($arrRow = $result->fetchRow()) {
    // do some action
}
I do not get any memory leak.
So how can I select all the data from the table without getting a memory leak with Doctrine in Symfony2?
Question EDIT
When I remove the foreach and just call iterate(), I still get a memory leak:
$query = $this->getDefaultEntityManager()
    ->getRepository('AppBundle:Contractor')
    ->createQueryBuilder('c')
    ->getQuery()->iterate();
The normal approach is to use iterate().
$q = $this->getDefaultEntityManager()
    ->createQuery('select c from AppBundle:Contractor c');
$iterableResult = $q->iterate();
foreach ($iterableResult as $row) {
    $contractor = $row[0]; // iterate() wraps each entity in an array
    // do something
}
However, as the Doctrine documentation says, this can still result in errors:
Results may be fully buffered by the database client/connection, allocating additional memory not visible to the PHP process. For large sets this may easily kill the process for no apparent reason.
The easiest approach to this would be to simply create smaller queries with offsets and limits.
// get the count of the whole query first
$qb = $this->getDefaultEntityManager()->createQueryBuilder();
$qb->select('COUNT(c)')->from('AppBundle:Contractor', 'c');
$count = $qb->getQuery()->getSingleScalarResult();

// let's say we go in steps of 1000 to avoid the memory problem
$limit = 1000;
$offset = 0;

// loop: every 1000 rows > create a query > loop over the result > repeat
while ($offset < $count) {
    $qb = $this->getDefaultEntityManager()->createQueryBuilder();
    $qb->select('c')
        ->from('AppBundle:Contractor', 'c')
        ->setMaxResults($limit)
        ->setFirstResult($offset);
    $result = $qb->getQuery()->getResult();
    foreach ($result as $contractor) {
        // do something
    }
    $offset += $limit;
}
With datasets this heavy, this will most likely exceed the maximum execution time, which is 30 seconds by default, so make sure to raise max_execution_time in your php.ini (or call set_time_limit() in the script). If you just want to update all records following a known pattern, consider writing one big update query instead of looping over and editing the results in PHP.
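For instance, a single DQL UPDATE runs entirely in the database and never hydrates entities; a minimal sketch (the field name and values are made up for illustration):
$this->getDefaultEntityManager()
    ->createQuery('UPDATE AppBundle:Contractor c SET c.status = :new WHERE c.status = :old')
    ->execute(array('new' => 'active', 'old' => 'pending'));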
Try using this approach:
foreach ($query as $contractor) {
    // doing something
    $this->getDefaultEntityManager()->detach($contractor);
    $this->getDefaultEntityManager()->clear($contractor);
    unset($contractor); // tell the gc the object is not in use anymore
}
Hope this helps.
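For reference, the batch-processing pattern from the Doctrine documentation combines iterate() with a periodic clear() of the whole EntityManager; a rough sketch (the batch size is arbitrary):
$em = $this->getDefaultEntityManager();
$q = $em->createQuery('select c from AppBundle:Contractor c');

$batchSize = 500;
$i = 0;
foreach ($q->iterate() as $row) {
    $contractor = $row[0];
    // do something with $contractor
    if (++$i % $batchSize === 0) {
        $em->clear(); // detaches all entities managed so far, freeing memory
    }
}
$em->clear();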
If you really need to get all the records, I'd suggest you use the database_connection service directly. Look at its interface and choose a method that won't load all the data into memory (and won't map the records to your entities).
You could use something like this (assuming this code is in a controller):
$db = $this->get('database_connection');
$query = 'select * from <your_table>';
$sth = $db->prepare($query);
$sth->execute();
while ($row = $sth->fetch()) {
    // some stuff
}
It's probably not what you need if you want hydrated objects after handling the whole collection, but maybe you don't need the objects at all. Anyway, think about it.

PHP array inserting / manipulation degrading over iterations

I am in the process of transferring data from one database to another. They are different systems (MSSQL to MySQL), so I can't do direct queries and am using PHP as an intermediary. Consider the following code. For some reason, each pass through the while loop takes twice as long as the one before.
$continue = true;
$limit = 20000;
while ($continue) {
    $i = 0;
    $imp->endTimer();
    $imp->startTimer("Fetching Apps");
    $qry = "THIS IS A BASIC SELECT QUERY";
    $data = $imp->src->dbQuery($qry, array(), PDO::FETCH_ASSOC);
    $inserts = array();
    $continue = (count($data) == $limit);
    $imp->endTimer();
    $imp->startTimer("Processing Apps " . memory_get_usage());
    if ($data == false) {
        $continue = false;
    }
    else {
        foreach ($data AS $row) {
            // THERE ARE SOME EXTREMELY BASIC IF STATEMENTS HERE
            $inserts[] = array(
                "paymentID"       => $paymentID,
                "ticketID"        => $ticketID,
                "applicationLink" => $row['ApplicationID'],
                "paymentLink"     => (int)($paymentLink),
                "ticketLink"      => (int)($ticketLink),
                "dateApplied"     => $row['AddDate'],
                "appliedBy"       => $adderID,
                "appliedAmount"   => $amount,
                "officeID"        => $imp->officeID,
                "customerID"      => -1,
                "taxCollected"    => 0
            );
            $i++;
            $minID = $row['ApplicationID'];
        }
    }
    $imp->endTimer();
    $imp->startTimer("Inserting $i Apps");
    if (count($inserts) > 0) {
        $imp->dest->dbBulkInsert("appliedPayments", $inserts);
    }
    unset($data);
    unset($inserts);
    echo "Inserted $i Apps<BR>";
}
It doesn't matter what I set the limit to; the processing portion takes twice as long each time. I am logging each part of the loop, and both selecting the data from the old database and inserting it into the new one take no time at all. The "processing portion" is what doubles every time. Why? Here are the logs; if you do some quick math on the timestamps, each step labeled "Processing Apps" takes twice as long as the one before. (I stopped it a little early on this run, but the final iteration was taking significantly longer.)
Well, I don't know why this works, but moving everything inside the while loop into a separate function DRAMATICALLY increases performance. I'm guessing it's a garbage collection / memory management issue, and that returning from the function call helps the garbage collector know it can release the memory. Now when I log the memory usage, it stays constant between calls instead of growing... Dirty PHP...
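A rough sketch of that refactor (the query and per-row logic are elided exactly as in the original; $imp is the same helper object):
function processBatch($imp, $limit) {
    $imp->startTimer("Fetching Apps");
    $qry = "THIS IS A BASIC SELECT QUERY";
    $data = $imp->src->dbQuery($qry, array(), PDO::FETCH_ASSOC);
    $imp->endTimer();

    $inserts = array();
    // ... build $inserts exactly as in the loop body above ...

    if (count($inserts) > 0) {
        $imp->dest->dbBulkInsert("appliedPayments", $inserts);
    }
    // when the function returns, $data and $inserts fall out of scope,
    // so their memory can be reclaimed before the next batch
    return count($data) == $limit;
}

$limit = 20000;
while (processBatch($imp, $limit)) {
    // keep going until a batch comes back smaller than $limit
}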

Memcached high insert/update cause the deadlocks

I am recording unique page views using memcached and storing them in the database at 15-minute intervals. Whenever the number of users grows, memcached gives me the following error:
Memcache::get(): Server localhost (tcp 10106) failed with: Failed reading line from stream (0)
I am using the following code to insert/update page views in memcached:
if ($memcached->is_valid_cache("visiors")) {
    $log_views = $memcached->get_cache("visiors");
    if (!is_array($log_views)) $log_views = array();
}
else {
    $log_views = array();
}
$log_views[] = array($page_id, $time, $other_Stuff);
$memcached->set_cache("visiors", $log_views, $cache_expire_time);
The following code retrieves the array from memcached, inserts X page views into the database, and puts the remaining page views back into memcached:
if ($memcached->is_valid_cache("visiors")) {
    $log_views = $memcached->get_cache("visiors");
    if (is_array($log_views) && count($log_views) > 0) {
        $logs = array_slice($log_views, 0, $insert_limit);
        $insert_array = array();
        foreach ($logs as $log) {
            $insert_array[] = '(' . $log[0] . ',' . $log[1] . ', NOW())';
        }
        $insert_sql = implode(',', $insert_array);
        if (mysql_query('INSERT SQL CODE')) {
            $memcached->set_cache("visiors", array_slice($log_views, $insert_limit), $cache_expire_time); // store new values
        }
    }
}
The inserts/updates seem to cause locking, because I can see lots of scripts waiting for their turn. I also think I am losing page views during the update process. Any suggestions on how to avoid the memcached read errors and make this code solid?
You are likely running into a connection limit within memcached, your firewall, your network, etc. We have a simple walkthrough of the most common scenarios: http://code.google.com/p/memcached/wiki/Timeouts
There's no internal locking that would cause sets or gets to block for any amount of time.
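One quick way to confirm the connection-limit theory is to compare the server's current connection count with the limit it was started with; a minimal sketch using the plain Memcache extension (the wrapper methods in the question are custom, so this bypasses them; the port is taken from the error message):
$memcache = new Memcache();
$memcache->connect('localhost', 10106);

$stats = $memcache->getStats();
// curr_connections vs. the -c limit memcached was started with (default 1024);
// listen_disabled_num counts how often the server hit that limit and stopped accepting
echo "current connections: " . $stats['curr_connections'] . "\n";
echo "times connection limit hit: " . $stats['listen_disabled_num'] . "\n";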
