I am using a WPMU installation and trying to import listings into my site.
I started with an n1-standard-1 (2 CPUs and 3.75GB RAM) instance on GCE. At that point the import was going smoothly and I was able to import at a pace of 250 entries per hour using WP All Import.
However, CPU utilization went up to 60-70%, which had a huge impact on live visitors to my server, so I upgraded to n1-standard-2 (4 CPUs and 7.5GB RAM) and then to 11GB RAM.
Slowly the performance of the import has started decreasing. I raised max vars, memory, and max execution time to practically unlimited values, but now, after just 15k entries, the speed is down to 80 entries per hour. I have 200k entries to import into my server.
I am also getting sudden spikes in CPU usage, which I did not see in the beginning. The error log doesn't mention anything related to the import process.
Screenshot: (CPU utilization graph from the GCP console, omitted here)
Any pointers?
I'd suggest looking at top, oprofile, or other tools to determine what on the machine is taking the time. top can also help you determine whether RAM or CPU is the issue, and it provides much more granularity than the graph you're showing from the GCP web console. (You could also try out Stackdriver in the Basic tier to get more detail on resource utilization, which might help you figure out the spikes.)
One note: you say you're using an n1-standard-1 with 2 CPUs and 3.75GB RAM, but that is not a combination we have. An n1-standard-1 has 1 vCPU and 3.75GB, and an n1-standard-2 has 2 vCPUs and 7.5GB.
An option to see if machine size is the limitation would be to power down the VM, change the size to something big like an n1-standard-32, restart, and see if it goes faster.
Another thing to investigate would be whether you are limited by disk performance. Note that our PD (boot disk) performance is related to the overall size of the disk. So if you created a very small disk, and if it is now getting full as you do more imports, it could be that you need to increase the size of the disk to get more performance.
Related
Current config:
16GB RAM, 4 CPU cores, Apache 2.2 using the prefork MPM (MaxClients set to 700, since the average process size is ~22MB, which puts 700 × 22MB ≈ 15GB right at the RAM limit), with the suexec and suphp mods enabled (PHP 5.5).
The back end of the site uses CakePHP 2 with storage on a MySQL server. The site consists of text and some compressed images on the front end and data processing on the back end.
Current traffic:
~60,000 unique visitors daily; at peak times I easily reach 700+ simultaneous connections, which fills MaxClients. When I run apachectl status at those moments, I can see that all the processes are in use.
The CPU is fine, but the RAM is all being used.
Potential traffic:
The traffic might grow to ~200,000 unique visitors daily, or even more. It might also not, but if it happens I want to be prepared, since I've already reached the limits of the current server with this config.
So I'm thinking about getting a much bigger server, for example one with 192GB RAM and 20 cores.
I could keep exactly the same config, which would let me handle roughly 10× my current traffic.
But I wonder whether there is a better config for my case that uses fewer resources while being just as efficient (and that has been proven to be so).
Rate Per Second = RPS
Suggestions to consider for your my.cnf [mysqld] section:
thread_cache_size=100 # from 8 to reduce threads_created
innodb_io_capacity=500 # from 200 to allow higher IOPS to your HDD
read_rnd_buffer_size=128K # from 256K to reduce handler_read_rnd_next RPS of 129,942
thread_concurrency=6 # from 10 to expedite query completion with your 4 cores
slow_query_log=ON # from OFF to allow log research and proactive correction
These changes should contribute to less CPU busy time.
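If you want to verify that the changes are helping, one option is to re-read the status counters the comments above are based on and turn them into per-second rates. A small PHP sketch, not part of the original tuning advice; host and credentials are placeholders:

```php
<?php
// Read the counters mentioned above and report them as per-second rates
// (the same RPS figures the suggestions are based on).
$db  = new mysqli('127.0.0.1', 'monitor_user', 'secret');   // placeholder credentials
$res = $db->query("SHOW GLOBAL STATUS WHERE Variable_name IN ('Threads_created', 'Handler_read_rnd_next', 'Uptime')");

$status = array();
while ($row = $res->fetch_assoc()) {
    $status[$row['Variable_name']] = (float) $row['Value'];
}

printf("Threads_created per second:       %.4f\n", $status['Threads_created'] / $status['Uptime']);
printf("Handler_read_rnd_next per second: %.1f\n", $status['Handler_read_rnd_next'] / $status['Uptime']);
```

Run this (or the equivalent queries in the mysql client) before and after applying the changes; both rates should drop.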
Observations:
A) 5.5.54 is past End of Life, newer versions perform better.
B) These suggestions are just the beginning of possible improvements, even with 5.5.54.
C) You should be able to gracefully migrate to innodb_file_per_table once you turn on the option. Your tables are already managed by the InnoDB engine.
For additional assistance, including free downloads of utility scripts, please view my profile and Network profile.
In a PHP shared hosting environment, what would be an optimal memory consumption to load a page? My current PHP script consumes 3,183,440 bytes of memory. What should I consider a good memory usage to serve, say, 10,000 users in parallel?
Please be detailed, as I am a novice at optimization.
Thanks in advance
3MB isn't that bad. Keep in mind that parts of PHP are shared, and depending on which server is used (IIS, nginx, Apache, etc.) you can also set up pools and clusters when you have to scale up.
But the old adage that testing is knowledge applies well here: run load tests against the site with 10 -> 100 -> 1000 concurrent connections and look at the performance metrics; that will give you more insight into how much memory is required.
For comparison, the site I normally work on averages 300+ users concurrently online and the memory usage is just under 600MB; however, when I run certain processes locally, it will easily use up 16MB.
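One low-effort way to get per-request numbers before running full load tests is to log PHP's own memory counters at the end of each request. A sketch, with an assumed log path:

```php
<?php
// Append each request's peak memory usage to a log so you can estimate how
// much RAM N truly concurrent requests would need. The log path and the use
// of a shutdown function are illustrative assumptions, not a requirement.
register_shutdown_function(function () {
    $uri  = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'cli';
    $line = sprintf(
        "%s %s peak=%.2fMB current=%.2fMB\n",
        date('c'),
        $uri,
        memory_get_peak_usage(true) / 1048576,
        memory_get_usage(true) / 1048576
    );
    file_put_contents('/tmp/php-memory.log', $line, FILE_APPEND);
});
```

As a first-order estimate, peak memory per request times the number of truly simultaneous requests gives the RAM you need; 3MB per request only turns into ~30GB if all 10,000 users hit the server in the very same instant, which almost never happens in practice.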
I'm using my own PHP script with MySQL and when there are many users on the site I can see that CPU Load is somewhat high and RAM usage is low. For example, CPU Usage is about 45% and used RAM is 3GB out of 64GB.
How can I make it use more RAM and less CPU? I'm using MyISAM as the MySQL engine, with PHP 7.0. I don't need an answer that explains step by step how to do this, but I would appreciate any directions because I don't know how to get started.
I have a dedicated server using cPanel, WHM, Apache and I have full control over what is on the server.
One good way to use RAM to relieve CPU load is caching.
That is, if your app needs some results that are pretty computationally expensive to produce, you should pre-compute them and store them in RAM; then, the next time your app needs those results, they can be fetched from the cache, probably a lot more cheaply than recomputing them.
Popular tools for this are Memcached and Redis.
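A minimal sketch of that get-or-compute pattern, assuming the PHP Memcached extension and a local memcached instance; the key name, TTL, credentials, and query are made-up placeholders:

```php
<?php
// Serve expensive results from RAM, hitting MySQL only on a cache miss.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);
$db = new mysqli('127.0.0.1', 'user', 'secret', 'mydb');   // placeholder credentials

function getTopPosts(Memcached $cache, mysqli $db) {
    $key   = 'top_posts_v1';
    $posts = $cache->get($key);
    if ($posts !== false) {
        return $posts;                      // cache hit: no query, no recomputation
    }
    // Cache miss: run the expensive query once...
    $result = $db->query("SELECT id, title FROM posts ORDER BY views DESC LIMIT 10");
    $posts  = $result->fetch_all(MYSQLI_ASSOC);
    // ...and keep the result in RAM for five minutes.
    $cache->set($key, $posts, 300);
    return $posts;
}

$top = getTopPosts($cache, $db);
```

The same pattern works with Redis through the phpredis extension; which store you pick matters less than making sure hot, expensive queries are answered from RAM instead of being recomputed on every request.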
I am running a set of cron jobs (every hour) to extract the latest data from a DB and write it into CSVs using PHP.
Recently I faced something unusual on my EC2 server. I saw a CSV generated with the header only, even though there was data. My logger for tracking the process showed that data was extracted, along with the count of extracted records. The only issue I found was that CPU utilization was at 100% during this scenario. Later, once CPU utilization returned to normal, everything went fine.
Then, after 4 days, a CSV was generated with the data twice: only one header, but the same set of data repeated twice in the CSV. My process logger showed the correct count this time as well. Again, the only issue found was that CPU utilization climbed to 100% during this period of time.
Is there any connection between EC2 CPU utilization and this behaviour, maybe something memory related? Has anyone faced similar issues, even on a different cloud?
Please advise.
Thanks
If a job takes more than one hour (because of high CPU utilization, for example), then another instance of the job will start, and you will likely get duplicated results in the CSV file. So you should prevent the cron job from being executed if one is already running. More information can be found here and here.
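One way to do that inside the PHP script itself is a non-blocking lock file via flock(). A minimal sketch, with an assumed lock-file path:

```php
<?php
// At the very top of the hourly script: bail out if the previous run is
// still going. The lock file path is an assumed example.
$fp = fopen('/tmp/extract-to-csv.lock', 'c');   // create if missing, never truncate

if ($fp === false || !flock($fp, LOCK_EX | LOCK_NB)) {
    // Another instance still holds the lock: skip this run instead of
    // appending a second copy of the data to the CSV.
    error_log('extract-to-csv: previous run still in progress, skipping');
    exit(0);
}

// ... existing extract-from-DB and write-to-CSV logic goes here ...

flock($fp, LOCK_UN);   // release the lock when the export finishes
fclose($fp);
```

The same effect can also be achieved from the crontab side with a wrapper such as flock(1), without touching the PHP code.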
Hi guys, I have a question about server RAM and a PHP/MySQL/jQuery script.
Can a script kill RAM even when the script doesn't take extra RAM? (I know it can happen when RAM usage grows to its maximum or because of the memory limit, but that isn't the case here.)
I'm testing the script, but every time I do, RAM quickly goes down.
The script doesn't show a memory limit error and it loads all the data correctly. When I'm not testing the script, RAM stays down.
The database holds only a couple of records: maybe 350 records across 9 tables (the biggest table has 147 records).
(I don't have any logs, just a simple, really simple, graph for the running server.)
Thanks for your time.
If you're not getting errors in your PHP error log about failing to allocate memory, and you're not seeing other problems from your server running out of RAM (such as extreme performance degradation due to memory pages being written to disk for demand paging), you probably don't need to worry about it. Any use case where a web server uses up that much memory in a single request is going to be pretty rare.
As for profiling the actual memory usage, trying to do it by watching something like the task manager is going to be pretty unreliable. Most PHP scripts complete in milliseconds, which isn't enough time for the memory allocations to even register in the task manager.
Even if you have a more reliable method of profiling the memory usage (I don't recall whether PHP has built-in functions for this, but it probably does), bear in mind that memory usage is going to fluctuate tremendously for reasons that may be hard to understand. PHP in particular is very high level: you can open a database connection, which involves everything down to the OS opening network sockets, creating internal data structures, caching things, and much more, all in a single line of code. The script may allocate many megabytes of memory for such a thing for a single database row, but may then deallocate it a millisecond later.
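For what it's worth, PHP does have built-in counters for this: memory_get_usage() and memory_get_peak_usage(). A minimal sketch of measuring one step of a script; the connection details and the query are made up for illustration:

```php
<?php
// Measure what a single step of the script actually allocates, using PHP's
// built-in memory counters. Credentials, database, and query are placeholders.
$before = memory_get_usage();

$db   = new mysqli('127.0.0.1', 'user', 'secret', 'mydb');
$rows = $db->query("SELECT * FROM biggest_table")->fetch_all(MYSQLI_ASSOC);

$after = memory_get_usage();
printf(
    "This step allocated %.2f MB (peak so far: %.2f MB)\n",
    ($after - $before) / 1048576,
    memory_get_peak_usage() / 1048576
);
```

For a database of 350 rows across 9 tables, the numbers printed here will almost certainly confirm the point above: the data itself is a tiny fraction of the interpreter's baseline footprint.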
Those database sizes are pretty negligible. Depending on the row sizes, it's possibly under a megabyte of data, which is a tiny drop in the bucket for memory on anything remotely modern. Don't worry about memory usage for something like that. Only if you see your scripts failing and your error log reports running out of memory should you really worry about it.