I am running a cron job to update the inventory in our database from an ERP inventory CSV file. The CSV file contains almost 19K records. The cron job picks up the records one by one and updates the matching inventory rows in the database. But for the past few days, only 13K-14K of the 19K records get parsed before the script breaks in the middle.
I have also tried running the script directly from the browser, but it raises the same issue. No error is displayed in the error log.
I thought it was a timeout issue and increased max_execution_time to 1500 (25 minutes), but the issue is still not resolved.
Can anyone suggest how to solve this issue? Thanks in advance!
Did you check the cron log? Which operating system are you using?
No error is displayed in the error log.
Then your first course of action is to verify that your error logging is working as expected. If it is a (PHP) timeout or a memory limit issue, the reason will be flagged - but it might not be reported where you are looking.
You forgot to tell us how cron runs the task - is this via the CLI SAPI, or are you using an HTTP client (wget, curl etc.) to invoke it via the webserver? The SAPIs have very different behaviours and usually use separate php.ini files.
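If you are not sure which SAPI the cron job ends up using, a quick check is to log it from inside the script itself - a minimal sketch:

<?php
// Report which SAPI is running this script ("cli" vs. "cgi-fcgi", "apache2handler", ...).
echo 'SAPI: ' . php_sapi_name() . PHP_EOL;

// Report which php.ini was loaded for this SAPI - the CLI and web SAPIs usually differ.
echo 'Loaded php.ini: ' . (php_ini_loaded_file() ?: 'none') . PHP_EOL;

// Report the effective time limit for this SAPI.
echo 'max_execution_time: ' . ini_get('max_execution_time') . PHP_EOL;

Running the same snippet from the command line and via the webserver will quickly show whether the two environments are working with different limits.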
I was thinking that it's a timeout issue
Because you've checked and it always bombs out at the same interval after starting?
But for the past few days, only 13K-14K of the 19K records get parsed
And previously it was taking less than the identified amount of time to complete?
and increased the max_execution_time
How? In the script? In the (right) php.ini? Note that if the script is running via the webserver, then there may also be a timeout configured on the webserver.
You might consider prefixing your script with:
set_time_limit(3000);
error_reporting(E_ALL);
ini_set('display_errors', 1);
ini_set('display_startup_errors', 1);
And capturing the stderr and stdout from the script.
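If the log destination itself is in doubt, you can also force PHP to log to a file you control, so a fatal error or timeout always leaves a trace - a minimal sketch (the log path is only an example), combined with a crontab redirect such as >> /path/to/cron.log 2>&1 for anything printed to stdout/stderr:

<?php
// Send every error to a known file so a fatal error or timeout leaves a trace.
error_reporting(E_ALL);
ini_set('display_errors', '1');
ini_set('log_errors', '1');
ini_set('error_log', '/tmp/inventory-cron.log'); // example path - point it somewhere writable

// Also record the last error on shutdown, in case the script dies mid-run.
register_shutdown_function(function () {
    $error = error_get_last();
    if ($error !== null) {
        error_log('Shutdown with error: ' . print_r($error, true));
    }
});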
Try dividing the CSV into two files and running the cron job on each half - I think your CSV file itself has a problem. If one half runs without trouble, then the process itself is fine; repeat the split on the failing half to narrow down the block that causes the problem.
A few years ago I worked on an interface that read a file to export to SAP, and sometimes a single special character would make the script break.
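Rather than splitting the file by hand, you can also let the import log its own progress so the breaking row identifies itself - a minimal sketch (the file path and the update call are placeholders for your own logic):

<?php
// Walk the CSV and record progress, so the last logged row points at the culprit.
$handle = fopen('/path/to/erp-inventory.csv', 'r'); // placeholder path
if ($handle === false) {
    die('Could not open CSV');
}

$row = 0;
while (($fields = fgetcsv($handle)) !== false) {
    $row++;
    // updateInventory($fields); // placeholder for the real update logic

    if ($row % 500 === 0) {
        error_log("Processed $row rows"); // the last message before the break narrows the search
    }
}
fclose($handle);
error_log("Finished, $row rows processed");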
Related
You'll have to excuse my lack of detail regarding this question, as I am still trying to work out what's going on.
I understand there may not be a straight answer to this but any help I can get will help me further debug the issue.
My issue is that all of a sudden my PHP script will exit and display a white page. No PHP or MySQL errors on the page and none in the error logs.
The issue occurs at very random times. When it does occur, it "appears" to be when a large number of MySQL queries are run at one time. When I say large, it might be a few hundred when sending out emails, and sometimes thousands if a large import is occurring.
The last time this issue happened was last night, when a user tried to send out 118 SMS messages. After each SMS was queued and also stored in the archive, there would have been roughly a couple hundred queries.
I tried to replicate the issue today by sending 125 and 250 SMS messages on two different occasions. Both worked fine. I then tried sending 250 SMS messages and 250 emails, and that also worked fine.
I am using Amazon Elastic Beanstalk for my PHP pages and RDS for my MySQL database.
Does this sound like a PHP or MySQL issue? And if neither are giving me anything in the error logs, do you have any suggestions as to what I can do to further debug this? Are there some other hidden logs or logging I should turn on?
Or are there any MySQL or PHP settings I should look at to try to get around the issue?
Configuration side:
First, look into the server's error log (it is different from the PHP error log). For example, Apache has its own log files for module startup, server messages, and so on. PHP's error log is a separate log, so the absence of messages there doesn't mean nothing went wrong.
Second, look into php.ini and check your log settings - which levels of errors are written, and where.
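You can also check the effective log settings from inside the script itself, rather than guessing which php.ini applies - a small sketch:

<?php
// Dump the logging-related settings the running script actually sees.
var_dump(
    ini_get('error_reporting'), // which error levels are reported
    ini_get('display_errors'),  // whether errors are printed to the output
    ini_get('log_errors'),      // whether errors are written to the log
    ini_get('error_log')        // where that log lives (empty means the SAPI default)
);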
Program side:
First, split your work so that it processes a maximum of 50 records per run. Redo your scripts so that they run and re-run until all necessary actions have been executed (a minimal batching sketch follows below).
Second, look into time/memory limits - are they sufficient to execute your operations? Say sending one mail takes 1 second; if your time limit is 30 seconds, you can only send a maximum of 30 emails. This is related to the first point, since you want to partition your tasks into segments that can safely be executed within the provided limits.
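A minimal sketch of that batching idea, assuming a hypothetical pending_sms table with a sent flag (the table, columns, and connection details are illustrative, not taken from the question):

<?php
// Process at most 50 pending rows per run; cron re-invokes the script until nothing is left.
$batchSize = 50;

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'); // illustrative credentials
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$rows = $pdo->query("SELECT id, phone, body FROM pending_sms WHERE sent = 0 LIMIT $batchSize")
            ->fetchAll(PDO::FETCH_ASSOC);

$mark = $pdo->prepare('UPDATE pending_sms SET sent = 1 WHERE id = ?');

foreach ($rows as $row) {
    // sendSms($row['phone'], $row['body']); // placeholder for the real sending code
    $mark->execute(array($row['id']));
}

echo count($rows) . " rows processed\n"; // 0 means the queue is drained

Cron can then invoke the script every minute or so; once it reports 0 rows processed, the queue is drained.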
If this helps anyone, the issue ended up being my DNS provider (Route 53). Even though I had increased the time limit in my PHP (max_execution_time), my DNS provider had a time limit of 60 seconds. So as soon as 60 seconds ticked past, it killed the request. That's why I didn't get any errors.
I've increased this limit but will also be relooking at my code :)
I am working on a script where I need to detect whether any of the job pages whose URLs are stored in my database have been updated, i.e. whether a job has been posted or the page has changed. I am fetching the headers of those pages and checking whether their Last-Modified date is newer than the one stored in my database, or whether the Content-Length differs from the stored value (once the headers have been fetched and stored, the next run compares the fresh values against the stored record for each URL; a minimal sketch of this check is included below).
The script works fine on my local machine, but when it runs on the Bluehost server it breaks after an unpredictable number of records or amount of time and shows the error [an error occurred while processing this directive] (that is when I trigger the script from my browser). When I let it run from cron it never returns anything (I have added code to send me a mail once the script has run fully, with or without errors, and I am suppressing the errors, if any, for the cases where the script cannot update a record - which is around 15 records).
Does anyone know what the error could be? I was earlier using the wget --delete-after command and am now using php -f, and my cron is on a dedicated IP. Execution time could be around 15-20 minutes.
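For reference, a minimal sketch of the header comparison described in the question (get_headers() is a real PHP function, but the URL and the stored values are placeholders; if the URL redirects, the header values can be arrays, which this sketch ignores):

<?php
// Compare a page's current headers against previously stored values.
$url = 'https://example.com/jobs'; // placeholder URL

$headers = get_headers($url, 1); // associative array of response headers
if ($headers === false) {
    error_log("Could not fetch headers for $url");
    exit(1);
}

$lastModified  = isset($headers['Last-Modified'])  ? strtotime($headers['Last-Modified']) : null;
$contentLength = isset($headers['Content-Length']) ? (int) $headers['Content-Length']     : null;

// $storedModified / $storedLength would come from the database in the real script.
$storedModified = strtotime('2013-01-01 00:00:00'); // placeholder
$storedLength   = 12345;                            // placeholder

if (($lastModified !== null && $lastModified > $storedModified) || $contentLength !== $storedLength) {
    echo "Page changed, update the stored record\n";
}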
PHP configs do have provisions for the maximum execution time and for the resources scripts are permitted to use, e.g. maximum memory.
You can set these programmatically (a small sketch is included at the end of this answer).
See http://www.php.net/set_time_limit
Note, though, that on many hosts you cannot raise the limit beyond the environment's max_execution_time set in the PHP configs - for example when safe mode is enabled or set_time_limit() has been disabled.
If you are on a shared server, these limits are often set quite aggressively to ensure resources do not get monopolised.
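If you do control those settings, a small sketch placed at the top of the cron script raises the limits and then confirms which values actually took effect (the values themselves are only examples):

<?php
// Try to raise the limits, then report what the environment actually accepted.
ini_set('memory_limit', '512M'); // example value
set_time_limit(30 * 60);         // 30 minutes, example value

echo 'memory_limit: '       . ini_get('memory_limit')       . PHP_EOL;
echo 'max_execution_time: ' . ini_get('max_execution_time') . PHP_EOL;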
I have a script that updates my database with listings from eBay. The number of sellers it grabs items from is always different, and some sellers have over 30,000 listings. I need to be able to grab all of these listings in one go.
I already have all the data pulling/storing working since I've created the client side app for this. Now I need an automated way to go through each seller in the DB and pull their listings.
My idea was to use CRON to execute the PHP script which will then populate the database.
I keep getting Internal Server Error pages when I'm trying to execute a script that takes a very long time to execute.
I've already set
ini_set('memory_limit', '2G');
set_time_limit(0);
error_reporting(E_ALL);
ini_set('display_errors', true);
in the script, but it still keeps failing at about the 45-second mark. I've checked ini_get_all() and the settings are sticking.
Are there any other settings I need to adjust so that the script can run for as long as it needs to?
Note the warnings from the set_time_limit function:
This function has no effect when PHP is running in safe mode. There is no workaround other than turning off safe mode or changing the time limit in the php.ini.
Are you running in safe mode? Try turning it off.
This is the bigger one:
The set_time_limit() function and the configuration directive max_execution_time only affect the execution time of the script itself. Any time spent on activity that happens outside the execution of the script such as system calls using system(), stream operations, database queries, etc. is not included when determining the maximum time that the script has been running. This is not true on Windows where the measured time is real.
Are you using external system calls to make the requests to eBay, or long calls to the database?
Profile your PHP script and look for particularly long operations (> 45 seconds). Try to break those operations into smaller chunks (a rough timing sketch follows below).
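For a rough first pass you don't need a full profiler - wrapping the suspect calls with microtime() is often enough. A minimal sketch (doWork() stands in for whichever eBay request or database call you suspect):

<?php
// Time a suspect operation to see whether it is the one blowing past the limit.
$start = microtime(true);

// doWork(); // placeholder for the eBay request / database call being measured

$elapsed = microtime(true) - $start;
if ($elapsed > 45) {
    error_log(sprintf('Operation took %.1f seconds - candidate for chunking', $elapsed));
}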
Well, as it turns out, I overlooked the fact that I was testing the script through the browser, which means Apache was handling the PHP process. PHP was executed with mod_fcgid, which had a timeout of exactly 45 seconds.
Executing the script directly from the shell and from CRON works just fine.
I have a script that loads a CSV via cURL; once it has the CSV, it adds each of the records to the database and, when finished, displays the total number of records added.
With fewer than 500 records it executes just fine. The problem is that whenever the number of records is too big, the execution is interrupted at some point and the browser displays a download dialog with a file named after the last part of my URL, without an extension and containing nothing - no warning, error, or any kind of message. The database shows that some of the records were added, and if I run the script several times it adds a few more each time.
I have tried to look for someone with a similar situation but haven't found one yet.
I would appreciate any insight into the matter; I'm not sure if this is a Symfony2 problem, a server configuration problem, or something else.
Thanks in advance.
Your script is probably reaching the maximum PHP execution time, which is 30 seconds by default. You can change it in the controller doing the lengthy operation with the PHP set_time_limit() function. For example:
set_time_limit(300); // 300 seconds = 5 minutes
That's more a limitation of your webserver/environment PHP is running in.
Increase max_execution_time to allow your webserver to run the request longer - an alternative would be writing a console command, since the CLI environment isn't restricted in many cases.
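A minimal sketch of that console-command approach for Symfony2 (the bundle, command name, and importer service are hypothetical - wire in your own import logic):

<?php
// src/Acme/ImportBundle/Command/ImportCsvCommand.php (hypothetical location)

namespace Acme\ImportBundle\Command;

use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class ImportCsvCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        $this->setName('acme:import-csv')
             ->setDescription('Imports the CSV without the webserver time limits');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        // $count = $this->getContainer()->get('acme.csv_importer')->import(); // hypothetical service
        $count = 0;
        $output->writeln(sprintf('%d records added', $count));
    }
}

You can then run it with php app/console acme:import-csv, from the shell or from cron, without the webserver's limits applying.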
I am using a script with set_time_limit(60*60*24) to process a large number of images. But after 1k images or so (1 or 2 minutes), the script stops without showing any errors on the command line.
I'm also using a logger that writes any error thrown by the script to a file on shutdown (by using register_shutdown_function). But when this script stops, nothing is written (it should write something even if no errors are thrown; it works perfectly with every other script, in every other situation I've had).
Apache error_log doesn't show anything either.
Any ideas?
Edit: My environment is CentOS 5.5, with PHP 5.3.
It is probably running out of memory.
ini_set('memory_limit', '1024M');
That may get you going, if you can allocate that much.
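To confirm that memory really is the problem before raising the limit blindly, you could log memory use periodically inside the image loop - a small sketch ($images and processImage() are placeholders):

<?php
// Log peak memory use every N images so a climb toward memory_limit becomes visible.
foreach ($images as $i => $image) {
    // processImage($image); // placeholder for the real work

    if ($i % 100 === 0) {
        error_log(sprintf(
            'Image %d: %.1f MB in use, %.1f MB peak',
            $i,
            memory_get_usage(true) / 1048576,
            memory_get_peak_usage(true) / 1048576
        ));
    }
}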
Please make sure you're not running in safe mode:
http://php.net/manual/en/features.safe-mode.php
Please note that register_shutdown_function does NOT guarantee that the associated function will be executed every time, so you should not rely on it.
see http://php.net/register_shutdown_function
To debug the issue, check the PHP error log (which is NOT the Apache error log when you're running PHP from the console; check your php.ini or ini_get('error_log') to find out where it is).
A solution may be to write a simple wrapper script in bash that runs the PHP script and then performs whatever you want executed once it finishes.
Also note that PHP doesn't count the time spent in external, non-PHP activities, such as network calls, some library functions, ImageMagick, etc.
So the time limit you set may effectively allow the script to run much longer in wall-clock time than you expect.