Why is PHP repeatedly executed in a cron job? - php

I set up a cron job on a CentOS 6.4 (final) server (with PHP 5.5.9 and Apache httpd 2.4.4 installed):
30 15 * * * wget "http://10.15.1.2/calc.php" -O /dev/null
calc.php FTPs to several servers to download several log files (using PHP's built-in FTP functions), inserts the log records into a single temp table, then counts log records by date, and lastly inserts the counted results into another summary table. It's a very simple program.
calc.php starts at 15:30 and ends at 15:46 (calc.php writes the start and end times to a log). When I check the log, I find calc.php is executed again at 15:45, almost when the first run is about to end. I've double-checked my calc.php: the main logic doesn't use any loop statements (while, do-while, for, etc.), and all the tasks mentioned above are written as functions in the same program file. I've run the same URL in a browser many times, and it always works normally.
So what could be the reason that causes repeated execution while running in cron job?
Here's the main logic part (note: the myxxxx() functions are my own simple display-message helpers; USEARRAY_MODE and TESTINSTMP_MODE aren't defined during this run):
myassess("Being-calcuated logfile list=\n".xpr($srclog_fn_list));
if (defined('USEARRAY_MODE')) {
// Insert outer srclog into temp array
if (false === ($srclog_calc_date = insert_srclog_to_array($srclog_get_date, $srclog_dir, $srclog_delta_days, $srclog_fn_list, $logtmp_sfile, $logtmp_dfile, $logtmp_result))) {
myerror("Insert outer srclog to temp array has problem!");
exit;
}
myassess("Actually calculated date = $srclog_calc_date.");
// If testinstemp mode, we stop here without doing statistics
if (defined('TESTINSTMP_MODE')) {
myinfo("Skipping statistics in testinstmp mode.");
exit;
}
// Do statistics and then insert into summary table
$total_cnt_list = do_statistics_from_array($_cnt_g2s_array, $userlist_tempfile, $rule_file, $srclog_calc_date, $logtmp_result);
} else { // use table
// Insert outer srclog 入 temp array
if (false === ($srclog_calc_date = insert_srclog_to_table($srclog_get_date, $srclog_dir, $srclog_delta_days, $srclog_fn_list))) {
myerror("Insert outer srclog to temp table has problem!");
exit;
}
myassess("Actually calculated date = $srclog_calc_date.");
// If testinstemp mode, we stop here without doing statistics
if (defined('TESTINSTMP_MODE')) {
myinfo("Skipping statistics in testinstmp mode.");
exit;
}
// Do statistics and then insert into summary table
$total_cnt_list = do_statistics_from_table($_cnt_g2s_array, $userlist_tempfile, $rule_file, $srclog_calc_date);
}
if (false === $total_cnt_list)
myerror("Calculate/Write outer summary has problem!");
else {
myinfo("Outer srclog actually calculated date = $srclog_calc_date.");
myinfo("Total summary count = ".array_sum($total_cnt_list));
myinfo("Insert-ok summary count = ".$total_cnt_list[0]);
myinfo("!!!Insert-fail summary count = ".$total_cnt_list[1]);
}
#mysql_close($wk_dbconn);
#oci_close($uodb_conn);
myinfo("### Running end at ".date(LOG_DATEFORMAT1).".");
myinfo("### total exectution time:".elapsed_time($_my_start_time, microtime(true)));
myinfo("############ END program ############");
exit;
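One thing worth checking, given the exact 15-minute gap: by default, wget retries a request up to 20 times, and its default read timeout is 900 seconds, i.e. 15 minutes. If calc.php sends no output for that long, wget may abort the first request and silently issue a second one, which would re-run the script at around 15:45. A crontab variant that rules out client-side retries (the 3600-second timeout is an assumption; pick anything longer than the script's run time):
30 15 * * * wget --tries=1 --timeout=3600 "http://10.15.1.2/calc.php" -O /dev/null
Alternatively, running the script through the PHP CLI instead of over HTTP avoids the HTTP client (and its retry behavior) altogether.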

Related

What is the proper way to monitor PHP execution in the frontend?

I will use an example to demonstrate this.
Assuming I have a MySQL DB where I store paths to files to be uploaded to S3, and a status column where each file is marked with either a pending or an uploaded string.
I have a PHP script, upload.php, which I can run with php upload.php and receive the output logged to my terminal as the script progresses. I would like to set up a cron job that runs the script at certain intervals, say every 30 minutes, where each time the DB is queried and the files which hold a pending status are processed for upload.
Now, I want to be able to track the progress of the script in the frontend, regardless of its current state (even if there are currently no pending items in the DB).
While I would appreciate any specific suggestion on how to do this, my question is also regarding best practice - meaning, what is the proper way to do this?
Here's an example of such a script (it's using Joshcam's MysqliDb):
// Get items with a pending status
function get_items_queue() {
    global $db;
    $cols = array("id", "filename");
    $db->where('status = "pending"');
    return $db->get('files', null, $cols);
}

// Upload items to S3; returns true on success, false otherwise
function UploadToS3($filename) {
    if (empty($filename)) {
        return false;
    }
    include_once('/s3/aws-autoloader.php');
    $s3 = new S3Client($somearray); // Some S3 credentials here
    // Print status
    echo $filename . ' is uploading';
    $uploaded = $s3->putObject($somearray); // Uploading to S3
    if ($s3->doesObjectExist($s3_bucket, $filename)) {
        // Print status
        echo $filename . ' was uploaded';
        return true;
    }
    // Print status
    echo 'There has been an issue while uploading ' . $filename;
    return false;
}

// Run the script
$queue_items = get_items_queue();
foreach ($queue_items as $key => $item) {
    $upload = UploadToS3($item['filename']);
    // Some function here that changes the status column for the uploaded item to 'uploaded'
    if ($upload) {
        set_item_queue_status($item['id']);
    }
}
I ended up setting up an installation of Cronicle by jhuckaby.
It's essentially a cron manager, but what's most important for my case is the live log viewer. It lets me run the script as a cron job at the intervals I defined and watch it execute via the log viewer, while being able to leave and come back at any point to view the currently running task (or any of the previous tasks that ran while I was away).
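For anyone who wants a similar effect without installing a cron manager, a minimal sketch (paths are assumptions) is to redirect the cron job's output to a log file and follow it with tail:
*/30 * * * * php /path/to/upload.php >> /var/log/upload.log 2>&1
tail -f /var/log/upload.log
This loses Cronicle's per-run separation and web UI, but the echo-based progress messages in the script above become visible live in the same way.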

php exec() never ended

I'm using PHP exec() to run an executable file, and it seems to never end.
But running this executable file in a shell works fine.
Here are the main things the executable file does:
fork();
The child process does some time-wasting things, and I setrlimit() a CPU time limit.
In the parent process: listen for signals and kill the child process when the calculated used_time exceeds the limit.
What can I do to make php exec() work?
Update:
Because the code is too long, I've selected just some of it.
main function
child_pid = fork();
if (child_pid == 0)
{
    compile();
    exit(0);
}
else
{
    int res = watch();
    if (res)
        puts("YES");
    else
        puts("NO");
}
child process
LIM.rlim_cur = LIM.rlim_max = COMPILE_TIME;
setrlimit(RLIMIT_CPU, &LIM);
alarm(0);
alarm(LIM.rlim_cur * 10);
switch (language)
{
    //..... here is execl() to call a compiler like gcc, g++, javac
}
parent process
int status = 0;
int used_time = 0;
struct timeval case_startv, case_nowv;
struct timezone case_startz, case_nowz;
gettimeofday(&case_startv, &case_startz);
while (1)
{
    usleep(50000);
    kill(child_pid, SIGKILL);
    gettimeofday(&case_nowv, &case_nowz);
    used_time = case_nowv.tv_sec - case_startv.tv_sec;
    if (waitpid(child_pid, &status, WNOHANG) == 0) // still running
    {
        if (used_time > COMPILE_TIME)
        {
            report_log("Compile time limit exceed");
            kill(child_pid, SIGKILL);
            return 0;
        }
    }
    else
    {
        // handle signals
    }
}
For the test, the PHP file just calls exec(). The situation I described only occurred when using php exec() to run the executable to compile user code like:
#include "/dev/random"
//....
A PHP script on a server has limited time to execute. It is generally not a good idea to execute long-running scripts this way; it is recommended that they be run as background jobs.
This limit is defined in php.ini, which is different for Apache and the shell.
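For reference, the relevant directive is max_execution_time: the web SAPIs default to 30 seconds, while the CLI defaults to 0 (unlimited). A script can also lift the limit itself:
// php.ini for the web SAPI typically has: max_execution_time = 30
// the CLI default is 0 (no limit); a script may also override it:
set_time_limit(0);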
At last, I found out why this happened:
I only killed child_pid, but not the other processes spawned by child_pid,
so php exec() would keep running forever.
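A common fix for that situation (a sketch built on the fork/watch excerpt above, not the poster's actual code): put the child into its own process group with setpgid(), then signal the negative PID, which kills the whole group, compiler processes included. compile() is the poster's function; the watchdog loop stays as before:
child_pid = fork();
if (child_pid == 0)
{
    setpgid(0, 0);                 /* child becomes leader of its own process group */
    compile();                     /* execl()s gcc/g++/javac, which may fork further */
    exit(0);
}
else
{
    setpgid(child_pid, child_pid); /* set it from the parent too, to avoid a race */
    /* ... watchdog as above, but: */
    kill(-child_pid, SIGKILL);     /* negative PID signals the entire group */
}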

A portable way of providing an IP-based cooldown period?

I have a PHP API front end running on a webserver. This specific PHP program is subject to distribution, thus it should be as portable as possible.
The feature I want to implement is an IP cooldown period, meaning that the same IP can only request the API a maximum of two times per second, meaning at least a 500ms delay.
The approach I had in mind is storing the IP in a MySQL database, along with the latest request timestamp. I get the IP by:
if (getenv('REMOTE_ADDR'))
$ipaddress = getenv('REMOTE_ADDR');
But some servers might not have a MySQL database, or the user installing this has no access. Another issue is the cleanup of the database.
Is there a more portable way of temporarily storing the IPs (keeping IPv6 in mind)?
and
How can I provide an automatic cleanup of IPs that are older than 500ms, with the least possible performance impact?
Also: I have no interest in looking at the stored IPs; it is just about the delay.
This is how I solved it for now, using a file.
Procedure
1. Get the client IP and hash it (to prevent file readout).
2. Open the IP file and scan each line.
3. Compare the time of the current record to the current time.
4. If the difference is greater than the set timeout, go to step 5; else go to step 6.
5. If the IP matches the client, create an updated record; else drop the record.
6. If the IP matches the client, provide a failure message; else copy the record.
Example code
<?php
$sIPHash = md5($_SERVER['REMOTE_ADDR']);
$iSecDelay = 10;
$sPath = "bucket.cache";
$bReqAllow = false;
$iWait = -1;
$sContent = "";
if ($nFileHandle = fopen($sPath, "c+")) {
    flock($nFileHandle, LOCK_EX);
    $iCurLine = 0;
    while (($sCurLine = fgets($nFileHandle, 4096)) !== FALSE) {
        $iCurLine++;
        $bIsIPRec = strpos($sCurLine, $sIPHash);
        $iLastReq = strtok($sCurLine, '|');
        // this record expired anyway:
        if ( (time() - $iLastReq) > $iSecDelay ) {
            // is it also our IP?
            if ($bIsIPRec !== FALSE) {
                $sContent .= time()."|".$sIPHash.PHP_EOL;
                $bReqAllow = true;
            }
        } else {
            if ($bIsIPRec !== FALSE) $iWait = ($iSecDelay-(time()-$iLastReq));
            $sContent .= rtrim($sCurLine).PHP_EOL; // fgets() keeps the newline, so trim before re-adding
        }
    }
    if ($iWait == -1 && $bReqAllow == false) {
        // no record yet, create one
        $sContent .= time()."|".$sIPHash.PHP_EOL;
        echo "Request from new user successful!";
    } elseif ($bReqAllow == true) {
        echo "Request from old user successful!";
    } else {
        echo "Request failed! Wait " . $iWait . " seconds!";
    }
    ftruncate($nFileHandle, 0);
    rewind($nFileHandle);
    fwrite($nFileHandle, $sContent);
    flock($nFileHandle, LOCK_UN);
    fclose($nFileHandle);
}
?>
Remarks
New users
If the IP hash doesn't match any record, a new record is created. Attention: file creation might fail if the script does not have rights to do that.
Memory
If you expect a lot of traffic, switch to a database solution altogether.
Redundant code
"But minxomat", you might say, "now each client loops through the whole file!". Yes, indeed, and that is how I want it for my solution. This way, every client is responsible for the cleanup of the whole file. Even so, the performance impact is held low, because if every client is cleaning, file size will be kept at the absolute minimum. Change this, if this way doesn't work for you.

php fgetcsv multiple lines not only one or all

I want to read biiiiig CSV files and insert them into a database. That already works:
if(($handleF = fopen($path."\\".$file, 'r')) !== false){
    $i = 1;
    // loop through the file line-by-line
    while(($dataRow = fgetcsv($handleF,0,";")) !== false) {
        // Only start at the startRow, otherwise skip the row.
        if($i >= $startRow){
            // Check if to use headers
            if($lookAtHeaders == 1 && $i == $startRow){
                $this->createUberschriften( array_map(array($this, "convert"), $dataRow ) );
            } else {
                $dataRow = array_map(array($this, "convert"), $dataRow );
                $data = $this->changeMapping($dataRow, $startCol);
                $this->executeInsert($data, $tableFields);
            }
            unset($dataRow);
        }
        $i++;
    }
    fclose($handleF);
}
My problem with this solution is that it's very slow. But the files are too big to load directly into memory... So I want to ask: is there a possibility to read, for example, 10 lines into the $dataRow array at once, not only one or all?
I want to get a better balance between memory and performance.
Do you understand what I mean? Thanks for the help.
Greetz
V
EDIT:
OK, I still have to try to find a solution with the MSSQL database. My solution was to stack the data and then do a multi-row MSSQL INSERT:
$dataStack = array(); // collect rows here until the batch is full
while(($dataRow = fgetcsv($handleF,0,";")) !== false) {
    // Only start at the startRow, otherwise skip the row.
    if($i >= $startRow){
        // Check if to use headers
        if($lookAtHeaders == 1 && $i == $startRow){
            $this->createUberschriften( array_map(array($this, "convert"), $dataRow ) );
        } else {
            $dataRow = array_map(array($this, "convert"), $dataRow );
            $data = $this->changeMapping($dataRow, $startCol);
            $this->setCurrentRow($i);
            if(count($dataStack) > 210){
                array_push($dataStack, $data);
                #echo '<pre>', print_r($dataStack), '</pre>';
                $this->executeInsert($dataStack, $tableFields, true);
                // reset the stack
                unset($dataStack);
                $dataStack = array();
            } else {
                array_push($dataStack, $data);
            }
            unset($data);
        }
        unset($dataRow);
    }
    $i++; // increment outside the startRow check, otherwise rows before $startRow are never passed
}
Finally, I have to loop over the stack in the method "executeInsert" and build a multiple-row INSERT, to create a query like this:
INSERT INTO [myTable] (field1, field2) VALUES ('data1', 'data2'),('data3', 'data4')...
That works much better. I still have to find the best balance, but for that I only need to change the value '210' in the code above. I hope this helps everybody with a similar problem.
Attention: Don't forget to execute the method "executeInsert" once more after reading the complete file, because some data may still be left in the stack, and inside the loop the method only runs when the stack reaches a size of 210...
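A sketch of that final flush, reusing the variables from the loop above:
// after the while loop: insert whatever is left in the stack
if (count($dataStack) > 0) {
    $this->executeInsert($dataStack, $tableFields, true);
    $dataStack = array();
}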
Greetz
V
I think your bottleneck is not reading the file, which is just a text file; your bottleneck is the INSERT into the SQL table.
Try something: just comment out the line that actually does the insert and you will see the difference.
I had this same issue in the past, where I did exactly what you are doing: reading a 5+ million line CSV and inserting it into a MySQL table. The execution time was 60 hours, which is unrealistic.
My solution was to switch to another DB technology. I selected MongoDB, and the execution time was reduced to 5 minutes. MongoDB performs really fast in these scenarios and also has a tool called mongoimport that will allow you to import a CSV file directly from the command line.
Give it a try if the db technology is not a limitation on your side.
Another solution would be splitting the huge CSV file into chunks and then running the same PHP script multiple times in parallel, where each one takes care of the chunks with a specific prefix or suffix in the filename.
I don't know which specific OS you are using, but on Unix/Linux there is a command-line tool called split that will do that for you and will also name the chunks with any filename prefix you want.
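For reference, sketches of both command lines (file and database names are placeholders): split cuts the CSV into 100,000-line chunks named chunk_aa, chunk_ab, and so on, while mongoimport loads a CSV straight into a collection:
split -l 100000 huge.csv chunk_
mongoimport --db mydb --collection mycoll --type csv --headerline --file huge.csv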

Best practice: Import mySQL file in PHP; split queries

I have a situation where I have to update a web site on a shared hosting provider. The site has a CMS. Uploading the CMS's files is pretty straightforward using FTP.
I also have to import a big (relative to the confines of a PHP script) database file (around 2-3 MB uncompressed). MySQL is closed for access from the outside, so I have to upload the file using FTP and start a PHP script to import it. Sadly, I do not have access to the mysql command-line client, so I have to parse and run the queries using native PHP. I also can't use LOAD DATA INFILE. I also can't use any kind of interactive front-end like phpMyAdmin; it needs to run in an automated fashion. I also can't use mysqli_multi_query().
Does anybody know of, or have, an already-coded, simple solution that reliably splits such a file into single queries (there could be multi-line statements) and runs them? I would like to avoid starting to fiddle with it myself due to the many gotchas that I'm likely to come across (how to detect whether a field delimiter is part of the data; how to deal with line breaks in memo fields; and so on). There must be a ready-made solution for this.
Here is a memory-friendly function that should be able to split a big file into individual queries without needing to load the whole file at once:
function SplitSQL($file, $delimiter = ';')
{
set_time_limit(0);
if (is_file($file) === true)
{
$file = fopen($file, 'r');
if (is_resource($file) === true)
{
$query = array();
while (feof($file) === false)
{
$query[] = fgets($file);
if (preg_match('~' . preg_quote($delimiter, '~') . '\s*$~iS', end($query)) === 1)
{
$query = trim(implode('', $query));
if (mysql_query($query) === false)
{
echo '<h3>ERROR: ' . $query . '</h3>' . "\n";
}
else
{
echo '<h3>SUCCESS: ' . $query . '</h3>' . "\n";
}
while (ob_get_level() > 0)
{
ob_end_flush();
}
flush();
}
if (is_string($query) === true)
{
$query = array();
}
}
return fclose($file);
}
}
return false;
}
I tested it on a big phpMyAdmin SQL dump and it worked just fine.
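Usage is just a single call (this assumes a connection and database have already been selected with mysql_connect() and mysql_select_db(), since the function calls mysql_query() directly):
SplitSQL('/path/to/dump.sql');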
Some test data:
CREATE TABLE IF NOT EXISTS "test" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT,
"name" TEXT,
"description" TEXT
);
BEGIN;
INSERT INTO "test" ("name", "description")
VALUES (";;;", "something for you mind; body; soul");
COMMIT;
UPDATE "test"
SET "name" = "; "
WHERE "id" = 1;
And the respective output:
SUCCESS: CREATE TABLE IF NOT EXISTS "test" ( "id" INTEGER PRIMARY KEY AUTOINCREMENT, "name" TEXT, "description" TEXT );
SUCCESS: BEGIN;
SUCCESS: INSERT INTO "test" ("name", "description") VALUES (";;;", "something for you mind; body; soul");
SUCCESS: COMMIT;
SUCCESS: UPDATE "test" SET "name" = "; " WHERE "id" = 1;
Single page PHPMyAdmin - Adminer - Just one PHP script file.
check : http://www.adminer.org/en/
When StackOverflow released their monthly data dump in XML format, I wrote PHP scripts to load it into a MySQL database. I imported about 2.2 gigabytes of XML in a few minutes.
My technique is to prepare() an INSERT statement with parameter placeholders for the column values. Then use XMLReader to loop over the XML elements and execute() my prepared query, plugging in values for the parameters. I chose XMLReader because it's a streaming XML reader; it reads the XML input incrementally instead of requiring to load the whole file into memory.
You could also read a CSV file one line at a time with fgetcsv().
If you're importing into InnoDB tables, I recommend starting and committing transactions explicitly, to reduce the overhead of autocommit. I commit every 1000 rows, but this is arbitrary.
I'm not going to post the code here (because of StackOverflow's licensing policy), but in pseudocode:
connect to database
open data file
PREPARE parameterized INSERT statement
begin first transaction
loop, reading lines from data file: {
    parse line into individual fields
    EXECUTE prepared query, passing data fields as parameters
    if ++counter % 1000 == 0,
        commit transaction and begin new transaction
}
commit final transaction
Writing this code in PHP is not rocket science, and it runs pretty quickly when one uses prepared statements and explicit transactions. Those features are not available in the outdated mysql PHP extension, but you can use them if you use mysqli or PDO_MySQL.
I also added convenient stuff like error checking, progress reporting, and support for default values when the data file doesn't include one of the fields.
I wrote my code in an abstract PHP class that I subclass for each table I need to load. Each subclass declares the columns it wants to load, and maps them to fields in the XML data file by name (or by position if the data file is CSV).
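For anyone who wants a concrete starting point, here is a minimal PDO sketch of that pseudocode for the CSV case; the DSN, credentials, table, and column names are all placeholders, and error handling is trimmed for brevity:
<?php
// Batched, prepared inserts from a CSV file (commit every 1000 rows).
$pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$stmt = $pdo->prepare('INSERT INTO mytable (col1, col2, col3) VALUES (?, ?, ?)');
$handle = fopen('data.csv', 'r');
$counter = 0;
$pdo->beginTransaction();
while (($fields = fgetcsv($handle)) !== false) {
    $stmt->execute($fields); // plug the parsed fields into the placeholders
    if (++$counter % 1000 == 0) {
        $pdo->commit();      // commit every 1000 rows to limit autocommit overhead
        $pdo->beginTransaction();
    }
}
$pdo->commit();              // commit the final partial batch
fclose($handle);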
Can't you install phpMyAdmin, gzip the file (which should make it much smaller), and import it using phpMyAdmin?
EDIT: Well, if you can't use phpMyAdmin, you can use the code from phpMyAdmin. I'm not sure about this particular part, but it's generally nicely structured.
Export
The first step is getting the input in a sane format for parsing when you export it. From your question, it appears that you have control over the exporting of this data, but not the importing.
~: mysqldump test --opt --skip-extended-insert | grep -v '^--' | grep . > test.sql
This dumps the test database, excluding all comment lines and blank lines, into test.sql. It also disables extended inserts, meaning there is one INSERT statement per line. This will help limit the memory usage during the import, but at a cost of import speed.
Import
The import script is as simple as this:
<?php
$mysqli = new mysqli('localhost', 'hobodave', 'p4ssw3rd', 'test');
$handle = fopen('test.sql', 'rb');
if ($handle) {
    while (!feof($handle)) {
        // This assumes you don't have a row that is > 1MB (1000000),
        // which is unlikely given the size of your DB.
        // Note that it has a DIRECT effect on your script's memory
        // usage.
        $buffer = stream_get_line($handle, 1000000, ";\n");
        $mysqli->query($buffer);
    }
}
echo "Peak MB: ", memory_get_peak_usage(true)/1024/1024;
This will utilize an absurdly low amount of memory as shown below:
daves-macbookpro:~ hobodave$ du -hs test.sql
15M test.sql
daves-macbookpro:~ hobodave$ time php import.php
Peak MB: 1.75
real 2m55.619s
user 0m4.998s
sys 0m4.588s
What that says is you processed a 15MB mysqldump with a peak RAM usage of 1.75 MB in just under 3 minutes.
Alternate Export
If you have a high enough memory_limit and this is too slow, you can try using the following export:
~: mysqldump test --opt | grep -v '^--' | grep . > test.sql
This will allow extended inserts, which insert multiple rows in a single query. Here are the statistics for the same database:
daves-macbookpro:~ hobodave$ du -hs test.sql
11M test.sql
daves-macbookpro:~ hobodave$ time php import.php
Peak MB: 3.75
real 0m23.878s
user 0m0.110s
sys 0m0.101s
Notice that it uses over 2x the RAM at 3.75 MB, but takes about 1/6th as long. I suggest trying both methods and seeing which suits your needs.
Edit:
I was unable to get a newline to appear literally in any mysqldump output using any of CHAR, VARCHAR, BINARY, VARBINARY, and BLOB field types. If you do have BLOB/BINARY fields though then please use the following just in case:
~: mysqldump5 test --hex-blob --opt | grep -v '^--' | grep . > test.sql
Can you use LOAD DATA INFILE?
If you format your db dump file using SELECT INTO OUTFILE, this should be exactly what you need. No reason to have PHP parse anything.
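For reference, the matching pair of statements looks roughly like this (paths and table names are placeholders; the MySQL server needs the FILE privilege and access to the path):
SELECT * INTO OUTFILE '/tmp/mytable.txt' FROM mytable;
LOAD DATA INFILE '/tmp/mytable.txt' INTO TABLE mytable;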
I ran into the same problem. I solved it using a regular expression:
function splitQueryText($query) {
    // the regex needs a trailing semicolon
    $query = trim($query);
    if (substr($query, -1) != ";")
        $query .= ";";

    // i spent 3 days figuring out this line
    preg_match_all("/(?>[^;']|(''|(?>'([^']|\\')*[^\\\]')))+;/ixU", $query, $matches, PREG_SET_ORDER);

    $querySplit = array();
    foreach ($matches as $match) {
        // get rid of the trailing semicolon
        $querySplit[] = substr($match[0], 0, -1);
    }
    return $querySplit;
}

$queryList = splitQueryText($inputText);
foreach ($queryList as $query) {
    $result = mysql_query($query);
}
Already answered: Loading .sql files from within PHP
Also:
http://webxadmin.free.fr/article/import-huge-mysql-dumps-using-php-only-342.php
http://www.phpbuilder.com/board/showthread.php?t=10323180
http://forums.tizag.com/archive/index.php?t-3581.html
Splitting a query cannot be reliably done without parsing. Here is valid SQL that would be impossible to split correctly with a regular expression.
SELECT ";"; SELECT ";\"; a;";
SELECT ";
abc";
I wrote a small SqlFormatter class in PHP that includes a query tokenizer. I added a splitQuery method to it that splits all queries (including the above example) reliably.
https://github.com/jdorn/sql-formatter/blob/master/SqlFormatter.php
You can remove the format and highlight methods if you don't need them.
One downside is that it requires the whole SQL string to be in memory, which could be a problem if you're working with huge SQL files. I'm sure that with a little bit of tinkering, you could make the getNextToken method work on a file pointer instead.
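A usage sketch, assuming splitQuery is called statically like the library's other methods and a mysql connection already exists:
require_once 'SqlFormatter.php';
$queries = SqlFormatter::splitQuery(file_get_contents('dump.sql'));
foreach ($queries as $query) {
    mysql_query($query);
}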
First of all, thanks for this topic. It saved a lot of time for me :)
And let me make a little fix for your code.
Sometimes, if TRIGGERs or PROCEDUREs are in the dump file, it is not enough to examine the ; delimiters.
In this case there may be a DELIMITER [something] directive in the SQL code, saying that the statement will not end with ; but with [something]. For example, a section in xxx.sql:
DELIMITER //
CREATE TRIGGER `mytrigger` BEFORE INSERT ON `mytable`
FOR EACH ROW BEGIN
SET NEW.`create_time` = NOW();
END
//
DELIMITER ;
So we first need a flag to detect that the query does not end with ;,
and we must delete the unwanted DELIMITER chunks, because mysql_query does not need a delimiter
(for it, the end of the string is the end of the statement),
so mysql_query needs something like this:
CREATE TRIGGER `mytrigger` BEFORE INSERT ON `mytable`
FOR EACH ROW BEGIN
SET NEW.`create_time` = NOW();
END;
So a little work and here is the fixed code:
function SplitSQL($file, $delimiter = ';')
{
    set_time_limit(0);
    $matches = array();
    $otherDelimiter = false;
    if (is_file($file) === true) {
        $file = fopen($file, 'r');
        if (is_resource($file) === true) {
            $query = array();
            while (feof($file) === false) {
                $query[] = fgets($file);
                if (preg_match('~' . preg_quote('delimiter', '~') . '\s*([^\s]+)$~iS', end($query), $matches) === 1) {
                    // DELIMITER directive detected; this line must not be sent to MySQL
                    array_pop($query);
                    $otherDelimiter = ($matches[1] != $delimiter);
                    if (!$otherDelimiter) {
                        // back to the default delimiter: the previous line holds the
                        // custom delimiter (e.g. "//"), so replace it with ";" to close the statement
                        array_pop($query);
                        $query[] = $delimiter;
                    }
                }
                if (!$otherDelimiter && preg_match('~' . preg_quote($delimiter, '~') . '\s*$~iS', end($query)) === 1) {
                    $query = trim(implode('', $query));
                    if (mysql_query($query) === false) {
                        echo '<h3>ERROR: ' . $query . '</h3>' . "\n";
                    } else {
                        echo '<h3>SUCCESS: ' . $query . '</h3>' . "\n";
                    }
                    while (ob_get_level() > 0) {
                        ob_end_flush();
                    }
                    flush();
                }
                if (is_string($query) === true) {
                    $query = array();
                }
            }
            return fclose($file);
        }
    }
    return false;
}
I hope I could help somebody too.
Have a nice day!
http://www.ozerov.de/bigdump/ was very useful for me in importing a 200+ MB SQL file.
Note:
The SQL file should already be present on the server so that the process can complete without any issue.
You can use phpMyAdmin for importing the file. Even if it is huge, just use the UploadDir configuration directive, upload the file there, and choose it from the phpMyAdmin import page. Once file processing gets close to the PHP limits, phpMyAdmin interrupts the import and shows you the import page again, with predefined values indicating where to continue the import.
What do you think about:
system("cat xxx.sql | mysql -u username database");
