The problem
I am using Laravel 5.3 to import a huge tab-separated file (more than 1 million rows and over 25 columns) into a MySQL database using functions in controller code (I am refraining from posting all the code here). While processing the file I encounter the following error:
FatalErrorException in Connection.php line 720:
Maximum execution time of 30 seconds exceeded
Please note that the application imports a different number of rows on different runs before failing.
Question
I know we can fix this using either of the following:
changing php.ini, as suggested here
adding ini_set('max_execution_time', 300); at the beginning of public/index.php, as suggested here (a minimal sketch follows)
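For reference, this is roughly what that second option looks like. It's only a sketch, assuming the stock Laravel 5.3 public/index.php; 300 is the same arbitrary value as above, and raising the limit here affects every web request, so it is a stopgap rather than a fix.

<?php
// public/index.php (Laravel 5.3) -- excerpt only; the rest of the stock file is unchanged.
ini_set('max_execution_time', 300);

require __DIR__.'/../bootstrap/autoload.php';

$app = require_once __DIR__.'/../bootstrap/app.php';
// ...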
A varied number of reasons might be behind this, and I am more interested in knowing where exactly it is running out of time. Laravel doesn't provide any more detail than the message above. I would really appreciate it if someone could suggest ways to debug this. Things that would help:
Is the time an aggregate of all requests made by a method?
Does memory overload cause this?
Will it help to chunk the data and handle it through multiple requests?
Environment
Laravel 5.3
Centos 7 on vagrant
MySQL
It's not a specific operation running out of time. It's... everything, combined, from start to finish.
max_execution_time integer
This sets the maximum time in seconds a script is allowed to run before it is terminated by the parser. This helps prevent poorly written scripts from tying up the server. The default setting is 30.
http://php.net/manual/en/info.configuration.php#ini.max-execution-time
The idea, here, is that for a web service, generally speaking, only a certain amount of time from request to response is reasonable. Obviously, if it takes 30 seconds (an arbitrary number for "reasonableness") to return a response to a web browser or from an API, something probably isn't working as intended. A lot of requests tying up server resources would result in a server becoming unresponsive to any subsequent requests, taking the entire site down.
The max_execution_time parameter is a protective control to mitigate the degradation of a site when a script -- for example -- gets stuck in an endless loop or otherwise runs for an unreasonable amount of time. The script execution is terminated, freeing resources that were being consumed, usually in an unproductive way.
Is the time an aggregate of all requests made by a method?
It's the total runtime for everything in the script -- not one specific operation.
Does memory overload cause this?
Not typically, except perhaps when the system is constrained for memory and uses a swap file, since swap thrashing can consume a great deal of time.
Will it help to chunk the data and handle it through multiple requests?
In this case, yes, it may make sense to work with smaller batches, which (generally speaking) should reduce the runtime. Everything is a tradeoff, as larger batches may or may not be more efficient in terms of processing time per unit of work; that is workload-specific and rarely linear.
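To make the chunking suggestion concrete, here is a rough sketch (not a drop-in solution) of batching the insert from inside a Laravel controller or artisan command. The file path, table name, column names, and batch size of 1,000 are placeholders, not taken from the question.

<?php
// Sketch of a chunked TSV import: read with fgetcsv() and flush one
// multi-row INSERT per batch, so no single statement does all the work at once.
use Illuminate\Support\Facades\DB;

$handle    = fopen(storage_path('app/import.tsv'), 'r');   // placeholder path
$batch     = [];
$batchSize = 1000;                                          // tune per workload

while (($row = fgetcsv($handle, 0, "\t")) !== false) {
    $batch[] = [
        'col_a' => $row[0],   // map tab-separated fields to real column names
        'col_b' => $row[1],
        // ...
    ];

    if (count($batch) >= $batchSize) {
        DB::table('imports')->insert($batch);   // one INSERT for the whole batch
        $batch = [];                            // release the buffered rows
    }
}

if ($batch) {
    DB::table('imports')->insert($batch);       // flush the final partial batch
}

fclose($handle);

Splitting that loop across several queued jobs or artisan invocations keeps any single request comfortably under the 30-second limit.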
So I have a XAMPP setup with Zurmo 2.6.5 running on it. Everything works like a charm. The speed at which it pulls up contacts, goes through pages, etc. is considerably fast. I have 2 GB RAM and this is the only web app that runs on it, so you could call it dedicated, I guess. The problem arises when I attempt to export a fairly decent amount of data to Excel (CSV is the only option available). For example, I tried exporting 200-odd rows of data and it timed out because of the max_execution_time setting. I increased it first from around 300 to 600, and now finally to 1200. The script keeps running as though there were no end to it :-/.
Surprisingly, when I first apply the filter (just one, nothing complex), it takes around 10-15 seconds to display the first 10 records. That indicates the query executes well within the time limit. I have memcached installed, as they suggest, to alleviate performance issues.
I checked Zurmo's forums and the net in general, but unfortunately I did not get even a single hit with reference to this issue. Can any fellow Zurmo developer / power user help me get this resolved?
Much appreciated. Thanks.
Hi guys, I have a question about my server's RAM and a PHP/MySQL/jQuery script.
Can a script eat up RAM even when it doesn't appear to need any extra RAM? (I know that can happen when RAM usage grows to its maximum, or because of the memory limit, but that isn't the case here.)
I'm testing the script, and every time I do, free RAM drops quickly.
The script doesn't throw a memory-limit error and it loads all the data correctly. Even when I'm not testing the script, the RAM stays down.
The database only holds a couple of records: maybe 350 records across 9 tables (the biggest table has 147 records).
(I don't have any logs, just a really simple graph of the running server.)
Thanks for your time.
If you're not getting errors in your PHP error log about failing to allocate memory, and you're not seeing other problems with your server running out of RAM (such as extreme performance degradation due to memory pages being written to disk for demand paging), you probably don't need to worry about it. Any use case where a web server uses that much memory in a single request is going to be pretty rare.
As for profiling the actual memory usage, trying to do it by watching something like the task manager is going to be pretty unreliable. Most PHP scripts complete in milliseconds, which isn't enough time for the memory allocations to even register in the task manager.
Even if you have a more reliable method of profiling the memory usage (I don't recall whether PHP has built-in functions for this, but it probably does), bear in mind that memory usage is going to fluctuate tremendously for reasons that may be hard to understand. PHP in particular is very high level: you can open a database connection, which involves everything down to the OS opening network sockets, creating internal data structures, caching things, and much more, all in a single line of code. The script may allocate many megabytes of memory for such a thing for a single database row, but may then deallocate it a millisecond later.
Those database sizes are pretty negligible. Depending on the row sizes it's possibly under a megabyte of data, which is a tiny drop in the bucket for memory on anything remotely modern. Don't worry about memory usage for something like that. Only if you see your scripts failing and your error log reports running out of memory should you really worry about it.
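For what it's worth, PHP does ship with functions for this. Here is a minimal sketch of spot-checking usage from inside a script; the labels and the range() call are just stand-ins for real work.

<?php
// Spot-check memory at interesting points. memory_get_usage(true) reports the
// memory allocated to PHP by the system; memory_get_peak_usage(true) is the
// high-water mark for the run so far.
function log_memory($label)
{
    printf(
        "%s: current %.2f MB, peak %.2f MB\n",
        $label,
        memory_get_usage(true) / 1048576,
        memory_get_peak_usage(true) / 1048576
    );
}

log_memory('before query');
$rows = range(1, 100000);     // stand-in for fetching a large result set
log_memory('after query');
unset($rows);
log_memory('after unset');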
I have a very long running script that is doing a pretty significant amount of work with about 30 million database records. Every time I start it with the CLI and let it run, it runs fairly quickly given the amount of work it's doing (about 5k records/minute). However, after about 90 minutes of this, it slows down dramatically, taking 2 hours to complete 5k records. If I restart it, it runs fine again for about 90 minutes.
The Apache log doesn't show anything for the time it slows down, and I'm just not sure where else to look.
Running on the command line, PHP's max execution time shouldn't matter, and I've commented out the spot in CodeIgniter where it sets it to 300; I've even added a set_time_limit(0) to the top of the script.
My database is PostgreSQL.
Any suggestions on where to look?
Edit: OK, it definitely seems to be a memory problem. I'm using several arrays to cache results for batch inserts and updates, but I am clearing them out afterwards, and they are used a ton of times before I hit the 90-minute mark.
Is there a way to see what's currently in memory?
Edit: I won't know if this is my solution for another 90 minutes, but if you're a CI user with memory issues, check out this thread: http://codeigniter.com/forums/viewthread/140012/#689396
You can view your memory usage with the function memory_get_usage().
The only things you can see in memory are the arrays or variables you have declared. Anything that has been unset(), reused, or nulled out is flagged for deletion by the garbage collector.
I did PHP import scripts back in the day and what you are probably running into is memory issues. Be sure to empty out arrays once you are done with them.
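A rough sketch of that pattern for a long-running CLI loop like yours is below; do_batch_insert() and $records are placeholders for whatever your script actually uses. Logging memory every so often makes a leak visible long before the 90-minute mark.

<?php
// Flush the caching array in fixed-size batches and empty it each time, so it
// never grows for the whole 30-million-row run; log memory periodically.
$cache = [];

foreach ($records as $i => $record) {        // $records: your result iterator
    $cache[] = $record;

    if (count($cache) >= 1000) {
        do_batch_insert($cache);             // placeholder for your batch insert
        $cache = [];                         // drop the references
    }

    if ($i % 10000 === 0) {
        error_log(sprintf('row %d: %.1f MB in use', $i, memory_get_usage(true) / 1048576));
    }
}

if ($cache) {
    do_batch_insert($cache);
}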
Depending on how much fun you want to have with your task, this might pique your interest:
http://gearman.org/
Gearman provides a generic application framework to farm out work to other machines or processes that are better suited to do the work.
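A bare-bones sketch of the worker/client split using PHP's gearman extension; the function name 'import_chunk' and the payload are made up for illustration.

<?php
// worker.php -- registers a function and processes jobs as they arrive.
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('import_chunk', function (GearmanJob $job) {
    $chunk = json_decode($job->workload(), true);
    // ... do the heavy per-chunk work here ...
    return 'done';
});
while ($worker->work());

<?php
// client.php -- queues chunks without waiting for them to finish.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
foreach ($chunks as $chunk) {                 // $chunks is a placeholder
    $client->doBackground('import_chunk', json_encode($chunk));
}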
I have a PHP class that selects data about a file from a MySQL database, processes that data in PHP, and then outputs the final data to the command line. Then it moves on to the next file within a foreach loop. (Later I'll be inserting this data into another table... but that's not important now.)
I want to make the processing as fast as possible.
When I run the script and monitor my system using top or iostat:
my CPUs are never less than 65% idle (4-core EC2 instance)
the PHP script sits at about 45%
mysqld sits at about 8%
my memory usage never passes ~1.5 GB (8 GB of RAM total)
there is very little disk I/O
What other bottlenecks could be preventing this process from running faster and using the available CPU and Memory?
EDIT 1:
This does not need to be a procedural process and I've designed it to parallelize the processing if necessary. If I can speed it up some, it'd be simpler to leave it as procedural processing.
I've monitored the disk I/O using iostat -x 1 and there is very little.
I need to speed this up in general because it will ultimately be used to process hundreds of millions of files and I'd like it to be as fast as possible as it's part of a larger processing step.
Well, it may be because a single PHP process can only run on one core at a time and you're not loading up your system to the point where it will have four concurrent jobs running continuously.
Example: if PHP were the only thing running on that box, each "job" were inherently tied to a single core, and only one request at a time were being made, I'd fully expect a CPU load of around 25% despite the fact that it's already going as fast as it possibly can.
Of course, once that system started ramping up to the point where there are continuously four PHP scripts running, you may find higher CPU utilisation.
In my opinion, you should only really worry about a performance problem if it's an actual problem (such as not being able to keep up with incoming requests). Optimising just because you want it to use more CPU and/or memory seems to be looking at it the wrong way around. I would just get it running as fast as possible without worrying about the actual resources used.
If you want to process hundreds of millions of files as fast as possible (as per your update) and PHP is core-bound, you should think about horizontal scaling.
In other words, if the processing of a single file is independent, you can simply start two or three PHP processes and have them process one file each. That will be more likely to get them running on distinct cores.
You can even scale across physical machines if necessary though that's likely to introduce network latency on the DB access (unless the DB is replicated across all the machines as well).
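One rough way to do that partitioning when the work is keyed by an integer id: start N copies of a small CLI worker and have each take every Nth row. The script name, table, columns, and credentials in this sketch are all assumptions.

<?php
// worker.php -- run as:  php worker.php 0 4    php worker.php 1 4    ...and so on.
// Each process handles a disjoint slice of the table and can saturate its own core.
$shard = (int) $argv[1];
$total = (int) $argv[2];

$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');   // placeholders
$stmt = $pdo->prepare('SELECT id, path FROM files WHERE MOD(id, :total) = :shard');
$stmt->execute(['total' => $total, 'shard' => $shard]);

while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    process_file($row['path']);    // placeholder for the existing per-file work
}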
Without a fair bit more detail, the options I can provide will be mostly generic ones.
The first problem you need to fix is the word "bottleneck", because it means everything and nothing.
It conjures up an image of some sort of constriction in the flow of whatever the machine does, which is so fast it must be like water running through pipes.
Computation isn't like that.
I find it helps to see how a very simple, slow, computer works, namely Harry Porter's Relay Computer.
You can watch it chug along, at a very slow clock rate, executing every little step within each instruction and finishing them before it starts the next.
(Now, obviously, machines these days are multi-core, pipelined, multi-level cache, blah blah. That's all fine, but that makes you think computation is like water flowing, and that prevents you from understanding software performance.)
Think of any computer and software as just like in that relay machine, except on a scale of nanoseconds, not seconds.
When a computer is calculating in a program, it is executing instructions one after the other. Call that "X".
When a program wants to read or write some bits to external hardware, it has to request that hardware to start, and then it has to find a way to kill time until the result is ready.
Call that "y".
It could be an idle loop, or letting another "thread" run, etc.
So the execution of a program looks like
XXXXXyyyyyyyXXXXXXXXyyyyyyy
If there are more "y"s in there than "X"s we tend to call it "I/O bound".
If not, we might call it "compute bound".
Either way, it's just a matter of proportion of time spent.
If you say it's "memory bound", that's just like I/O except it could be different external hardware.
It still occupies some fraction of the overall sequential timeline.
Now for any given task, there are infinitely many programs that could be written to do it. Some of them will get done in fewer steps than all the others.
When you want performance, you want to get as close as possible to writing one of those programs.
One way to do it is to find "X"s and "y"s that you can get rid of, and get rid of as many as possible.
Now, within a single thread, if you pick an "X" or "y" at random, how can you tell if you can get rid of it?
Find out what its purpose is!
That "X" or "y" represents a moment in the execution sequence of the program, and if you look at the state of the program at that time, and look at the source code, you will be able to figure out why that moment is being spent.
Do that a few times.
As soon as you see two moments in time having a similar less-than-absolutely-necessary purpose, there are probably a lot more like them, and you've found something you can get rid of.
If you do so, the program will no longer be spending that time.
That's the basic idea behind this method of performance tuning.
Here's an example where that method was used, over several iterations, to remove over 97% of the time spent in a program.
Not all programs are that far away from optimal.
(Some are much farther.)
Many programs just have to do a certain amount of "X"s or "y"s, and there's no way around it.
Nevertheless, it is often very surprising how much room you can find for speedup in otherwise perfectly good code - provided - you forget about "bottlenecks" and look for steps that it's doing, over time, that could be removed or done better.
It's easy.
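If you want a crude way to take those random looks in PHP itself, here is one sketch (CLI only, and it assumes PHP 7.1+ with the pcntl extension): interrupt the script every few seconds and print where it is; whatever keeps showing up is where the time is going.

<?php
// Crude stack sampling: every 3 seconds, dump the current call stack.
pcntl_async_signals(true);
pcntl_signal(SIGALRM, function () {
    echo "---- sample ----\n";
    debug_print_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS);
    pcntl_alarm(3);               // re-arm for the next sample
});
pcntl_alarm(3);

// ... the long-running work goes here ...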
I suspect you're spending most of your time communicating with MySQL and reading the files. How are you determining that there's very little IO? Communicating with MySQL is going to be over the network, which is very slow compared to direct memory access. Same with reading files.
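A quick way to test that suspicion is to split the loop's wall-clock time between the database round-trip and the PHP processing; fetch_file_row() and process_row() below are stand-ins for whatever the class actually does.

<?php
// Time the two phases separately so it is obvious which one dominates.
$dbTime = $cpuTime = 0.0;

foreach ($fileIds as $id) {            // $fileIds is a placeholder list
    $t0  = microtime(true);
    $row = fetch_file_row($id);        // round-trip to MySQL
    $t1  = microtime(true);
    process_row($row);                 // pure PHP work
    $t2  = microtime(true);

    $dbTime  += $t1 - $t0;
    $cpuTime += $t2 - $t1;
}

printf("MySQL: %.1f s, PHP processing: %.1f s\n", $dbTime, $cpuTime);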
Looks like CPU is your bottleneck. Or, to be more precise, a single core is your bottleneck.
100% utilisation of a single core will result in a "25% CPU utilisation" if the other three cores are idle.
Your numbers are consistent with a PHP script running at 100% on a single core, with 5 to 10% utilisation on the other three cores.
Sorry to resurrect an old thread, but thought this might help someone out.
I had a similar problem, and it came down to a command-line script that was throwing numerous 'Notice' warnings. That somehow led to it performing slowly and using less than 10% of the CPU. This behavior only showed up after migrating from Mac OS X to Ubuntu, as the default on OS X seems to be to suppress the warnings. Once I fixed the offending code it performed much better, with processes consistently using around 100% CPU.
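If you suspect the same thing, a quick check is to make notices visible (and logged) on the CLI, then fix whatever is generating them. Something like this at the top of the entry script, or the equivalent settings in the CLI php.ini, does it; the log path is just an example.

<?php
// Surface notices instead of generating and discarding them silently.
error_reporting(E_ALL);
ini_set('display_errors', '1');
ini_set('log_errors', '1');
ini_set('error_log', '/tmp/cli-php-errors.log');   // example path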
As the other guy said, sorry to resurrect an old thread, but this may help somebody.
I had the same issue: running a bunch of processes in parallel, all using MySQL. The machine was slow, with no identifiable bottleneck: not CPU, memory, nor disk.
It turns out that the most probable cause of my problems was that MySQL internal threads were hung on the same semaphore most of the time. Switching from vanilla MySQL 5.5 to MariaDB 10.0 fixed the problem.
Also, to ensure that my machine is always running at full capacity while not being flooded, I have created a Perl script raspawn.pl (on GitHub).
You can read the full sad story here.