When I import one of the demos from the Bridge theme, it gets stuck at 90% showing “The import process may take some time. Please be patient”. I waited for at least an hour, but the problem remained the same. I also tried on another machine, and it hit the same issue.
My System Information
- PHP Memory Limit: 1 GB
- PHP Version: 7.4.33
- PHP Post Max Size: 1024M
- PHP Time Limit: 1500
- PHP Max Input Vars: 5000
- Max Upload Size: 1 GB
Related
I have a corpus of over 100 million Unicode words; the file is around 2GB. I wrote PHP code to count the frequency of each word in the corpus. The code runs in Mozilla Firefox against a XAMPP local server. It reads 90MB of text at a time, counts frequencies, and then takes the next 90MB (since it could not hold the whole file at once). When I ran the code on a PC with 6GB RAM and a Core 2 Duo processor, it took 2 days to complete 20% of the work. Then, on another PC with 8GB RAM and a Core i5 processor, it took the same time. Finally, I used a server with 32GB RAM and a Xeon Silver 4114 processor, and it is taking almost the same amount of time. This makes me think something is restricting the resources the code can use, but I don't get what. Is there any speed limitation in browsers or the local server? Please help.
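The bottleneck here is usually the approach, not the hardware: reading fixed 90MB slabs through a browser request keeps rebuilding huge strings and adds web-stack overhead. A minimal sketch of the alternative, streaming the file line by line and running it from the CLI (`php count_words.php corpus.txt`) instead of through Firefox; the file name and function are illustrative, not the asker's actual code:

```php
<?php
// Stream the corpus one line at a time instead of loading 90MB slabs.
// Run from the command line, not the browser, to avoid web timeouts.

function count_word_frequencies(string $path): array
{
    $freq = [];
    $fh = fopen($path, 'rb');
    if ($fh === false) {
        throw new RuntimeException("Cannot open $path");
    }
    while (($line = fgets($fh)) !== false) {
        // \p{L}+ matches runs of Unicode letters; /u enables UTF-8 mode.
        preg_match_all('/\p{L}+/u', $line, $m);
        foreach ($m[0] as $word) {
            $w = mb_strtolower($word, 'UTF-8');
            $freq[$w] = ($freq[$w] ?? 0) + 1;
        }
    }
    fclose($fh);
    return $freq;
}
```

Because each line is discarded after counting, memory stays roughly proportional to the number of distinct words, not the 2GB file size.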
I am using a WPMU installation and trying to import listings into my site.
I started with an n1-standard-1 (2 CPUs and 3.75GB RAM) instance on GCE. At that time the import was going smoothly and I was able to import at a pace of 250 entries per hour using WP All Import.
However, at that point CPU utilization went to 60-70%, which had a huge impact on live visitors to my server, so I upgraded to n1-standard-2 (4 CPUs and 7.5GB RAM) and then to 11GB RAM.
Import performance has slowly been decreasing. I raised max input vars, memory, and max execution time to practically infinite values, but now, after just 15k entries, the speed is down to 80 entries per hour. I have to import 200k entries into my server.
I am also getting sudden spikes in CPU usage, which I did not have in the beginning. The error log has nothing in it related to the import process.
Screenshot:
Any pointers?
I'd suggest you try looking at top, oprofile, or other tools to determine what is going on with the machine that is taking the time. top can also help you determine whether RAM or CPU is the issue, and can provide much more granularity than the graph you're showing from the GCP web console. (You could also try out Stackdriver in the Basic tier to get more detail on the resource utilization, which might help you figure out the spikes).
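The inspection suggested above can start with a few standard Linux commands on the VM (assuming a typical GCE Linux image; `iostat` needs the sysstat package installed):

```shell
# Snapshot of load and per-process CPU/memory usage
top -b -n 1 | head -n 20

# Free vs. used RAM, in MB -- tells you whether you're memory-bound
free -m

# Per-disk utilization; high %util suggests a disk bottleneck
# (requires sysstat: sudo apt-get install sysstat)
iostat -x 1 3
```

If `top` shows one PHP/MySQL process pinned at 100% of a single core, a bigger machine won't help much; if `free` shows heavy swap use or `iostat` shows the disk saturated, the fix is memory or disk, not CPU.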
One note - you say you're using an n1-standard-1 with 2 CPUs and 3.75GB RAM, but that is not a combination we have. An n1-standard-1 has 1 vCPU and 3.75GB RAM, and an n1-standard-2 has 2 vCPUs and 7.5GB.
An option to see if machine size is the limitation would be to power down the VM, change the size to something big like an n1-standard-32, restart, and see if it goes faster.
Another thing to investigate would be whether you are limited by disk performance. Note that our PD (boot disk) performance is related to the overall size of the disk. So if you created a very small disk, and if it is now getting full as you do more imports, it could be that you need to increase the size of the disk to get more performance.
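If disk size does turn out to be the limit, growing a persistent disk is a sketch like the following (disk name and zone are placeholders; this needs gcloud credentials, so it is shown for reference rather than as a runnable test):

```shell
# List disks to find the boot disk's name, zone, and current size
gcloud compute disks list

# Grow the disk -- PD throughput/IOPS scale with size, so a larger
# disk is also a faster disk
gcloud compute disks resize my-boot-disk --zone=us-central1-a --size=200GB
```

After resizing, the filesystem inside the VM also has to be grown; recent GCE Debian/Ubuntu images do this automatically on reboot.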
I have deployed a WordPress site on a t1.micro, but there is a problem with database connectivity: it goes down multiple times a day (6-7 times), and every time I need to restart the mysql/apache server. I upgraded the instance to t2.micro to get more memory, but the problem was still there. Then, after reading lots of forums, I decided to increase the memory again and upgraded to t2.medium:
| Model | vCPU | CPU Credits / hour | Mem (GiB) | Storage |
|-----------|------|--------------------|-----------|----------|
| t2.medium | 2 | 24 | 4 | EBS-Only |
On many forums about WordPress and AWS, the advice was that this is caused by low RAM and that the MySQL error log can help you find the exact issue. But when I checked the MySQL error log, there was nothing to display.
Now my RAM is much larger, but the server/mysql still keeps shutting down.
Is there anybody who can help me get to the bottom of this? Thanks in advance.
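One common explanation for mysqld dying on small instances with an empty MySQL error log is the kernel OOM killer terminating it under memory pressure, which shows up in the system log rather than MySQL's. A diagnostic sketch (requires root, log paths vary by distro, so shown for reference):

```shell
# Look for OOM-killer activity in the system log
sudo grep -i -e 'out of memory' -e 'killed process' /var/log/syslog /var/log/messages

# A small swap file often keeps mysqld alive on low-RAM instances
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```

If the grep turns up `Killed process ... (mysqld)`, the fix is swap, more RAM, or trimming MySQL/Apache memory settings rather than anything in MySQL's own configuration.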
I have a script which generates 3 different image sizes from a library of images and, as you may guess, it takes a while to do its job: approximately 5 minutes for 400 images.
The default maximum execution time of 30 seconds was not enough, so I changed it in php.ini by setting max_execution_time = 1800. I checked the updated value in phpinfo() and it confirmed that the new time limit is 1800. Just to be sure the error is not caused by a MySQL timeout either, I also set mysql.connect_timeout = 1800.
The problem is that my script is still timing out after 30 seconds when it should not be.
I was thinking of adding
set_time_limit(1800)
at the beginning of every script involved in the process, but that would require me to set it in processors, controllers, and so on.
I searched for some internal setting regarding script execution time but have found none.
Does anybody have any ideas how to force the script to run longer without timing out?
UPDATE
The error is 500
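For reference, the settings described above in php.ini form. One thing worth checking (an assumption, not something the question confirms): when PHP runs as CGI/FastCGI, the web server has its own request timeout (e.g. Apache mod_fcgid's FcgidIOTimeout) that can return a 500 after a fixed interval regardless of max_execution_time.

```ini
; php.ini -- values described in the question above
max_execution_time = 1800
mysql.connect_timeout = 1800
```

If the script dies at exactly 30 seconds even with these in place, the limit being hit is likely outside PHP.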
MODX has nothing to do with it. Change the setting in your php.ini - check the docs here.
Also, why are you slamming such a heavy script all at once?
Use getCount to get the total number of images, then place a foreach loop processing a fixed number of images inside another loop which sleeps or waits between batches to taper out the load.
My server would probably process 400 images with little effort in under 30 seconds. You may also want to look at memory_limit in your config. I use 256 MB, but I also have a couple dozen cores on the server with a massive amount of memory.
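The batching idea above can be sketched as follows; the image list and worker callback are placeholders, not MODX API calls:

```php
<?php
// Process items in fixed-size chunks, pausing between chunks so the
// load is tapered instead of hitting the server all at once.

function process_in_batches(array $images, int $batchSize, callable $worker, int $pauseSeconds = 1): int
{
    $done = 0;
    foreach (array_chunk($images, $batchSize) as $batch) {
        foreach ($batch as $image) {
            $worker($image);   // e.g. generate the 3 resized variants
            $done++;
        }
        sleep($pauseSeconds);  // let CPU/IO settle between batches
    }
    return $done;
}
```

Combined with set_time_limit() at the top of the batch runner, this keeps any single stretch of work short even when the total job takes minutes.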
Long time reader, first time poster
I have a script which imports rows from a CSV file, processes them, then inserts them into a MySQL database; the CSV file itself has around 18,800 rows.
This script runs as part of a WordPress plugin installation and appears to be very temperamental. Sometimes it will complete the entire script and load the page as normal; other times, let's say two-thirds of the time, it will only import around 17.5k of the rows before silently terminating the script and reloading the page without any GET or POST vars.
I have tried everything I can think of to see why it's doing this, but with no luck.
- The server software is Apache on Linux
- The server error log has no entries
- max execution time is set to 0
- PHP max input time is 1800
- PHP register long arrays is set to on
- The script runs under PHP 5.3.5 (CGI)
- The database is hosted on the same server
- The max memory limit is 256M
- The max post size is 7M
Is there anything I am missing that may be causing an error?
Any help would be appreciated, as I am totally stumped!
Thanks in advance!
EDIT:
If I use a CSV file of 15k rows instead of 18k, it completes correctly. Could it be a time issue?
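A silent die near the end of a large file that goes away with a smaller file does point at a time or memory ceiling. A sketch of a streaming import that lifts the limits from inside the script and reads one row at a time; the callback is a placeholder, not the plugin's actual insert code:

```php
<?php
// Lift runtime limits for this request only. Note that under CGI the
// web server may still enforce its own timeout on top of these.
set_time_limit(0);
ini_set('memory_limit', '512M');

function import_csv(string $path, callable $insertRow): int
{
    $fh = fopen($path, 'rb');
    if ($fh === false) {
        throw new RuntimeException("Cannot open $path");
    }
    $rows = 0;
    while (($row = fgetcsv($fh)) !== false) {
        $insertRow($row);  // e.g. a prepared INSERT per row
        $rows++;
    }
    fclose($fh);
    return $rows;
}
```

Since fgetcsv() reads one row at a time, memory stays flat regardless of file size; if the script still dies at a consistent wall-clock point, the culprit is likely a CGI or web-server timeout rather than PHP's own limits.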