Execution time not decreasing on higher computer configuration - PHP

I have a corpus consisting of over 100 million Unicode words; the file size is around 2GB. I wrote PHP code to count the frequency of each word in the corpus. The code runs in Mozilla Firefox against a XAMPP local server. Since it cannot load the whole file at once, it reads 90MB of text at a time, counts the frequencies, and then moves on to the next 90MB. When I ran the code on a PC with 6GB RAM and a Core 2 Duo processor, it took 2 days to complete 20% of the work. Then, on another PC with 8GB RAM and a Core i5 processor, it took the same time. Finally, on a server with 32GB RAM and a Xeon Silver 4114 processor, it is taking almost the same amount of time. This makes me think something is restricting the code from using the PC's resources, but I don't understand what. Is there any speed limitation in browsers or the local server? Please help.
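For what it's worth, the hardware is probably not the bottleneck here: a single PHP process reading and splitting text is bound by one core and by disk I/O, so more RAM and cores change little. Below is a minimal sketch of a streaming counter, run from the CLI rather than through a browser; the file handling and the assumption of whitespace-delimited UTF-8 words are mine, not the poster's.

```php
<?php
// Minimal sketch of a streaming word-frequency counter. Assumptions (mine,
// not the poster's): the corpus is UTF-8 with whitespace-delimited words,
// and the script is run from the CLI (php count_words.php corpus.txt),
// which avoids browser and web-server execution limits entirely.
$counts = [];
$handle = fopen($argv[1], 'rb');
if ($handle === false) {
    fwrite(STDERR, "Cannot open corpus file\n");
    exit(1);
}
while (($line = fgets($handle)) !== false) {
    // Split on runs of Unicode whitespace (the /u modifier enables UTF-8 mode).
    $words = preg_split('/\s+/u', trim($line), -1, PREG_SPLIT_NO_EMPTY);
    foreach ($words as $word) {
        if (!isset($counts[$word])) {
            $counts[$word] = 0;
        }
        $counts[$word]++;
    }
}
fclose($handle);

arsort($counts); // highest frequency first
foreach (array_slice($counts, 0, 10, true) as $word => $n) {
    echo "$word\t$n\n";
}
```

Even then, a 2GB corpus will take a while in PHP; the point is that the work is I/O- and single-core-bound, which is why a bigger machine barely helps.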

Related

Can WAMPServer running on Windows 10 handle high traffic?

I am working on an IoT project in which several sensors will send data to my WAMP server over the internet, and I will log that data in my database.
My question is: can Apache + MySQL handle data of this volume?
Nearly 800 data points arrive from the sensors over different URLs to my server.
Those data points need to be inserted into different tables of the database.
These 800 data points arrive about every 5 seconds, 24*7, so on average I will need to fire 800-900 queries every 5 seconds.
Would WAMP and MySQL be sufficient to handle this density of data? If not, what other server should I use? Or would I need to install a server OS instead of Windows?
My PC specs: Intel Core i7, 16GB RAM, 2GB Nvidia 1080 graphics.
NO!
But that is not WAMPServer's fault.
Windows 10 itself can only handle about 20 remote connections, as Windows itself uses some of the maximum 30 that are allowed.
And before you ask, no, this is not configurable; otherwise no one would buy Windows Server.
However, if you want to use WAMPServer for testing (because that's what it, XAMPP, et al. were designed for), then you should not have too many problems. It will cope with the volume of traffic, just not hundreds of connections at the same time.
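On the database load itself: 800-900 single-row INSERTs every 5 seconds can be collapsed into a handful of multi-row statements. A rough sketch with PDO follows; the table and column names are made up for illustration.

```php
<?php
// Hypothetical sketch: batch one 5-second window of sensor readings into a
// single multi-row INSERT instead of hundreds of single-row queries.
// Table and column names are assumptions, not from the question.
$pdo = new PDO('mysql:host=localhost;dbname=iot', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$readings = [
    // [sensor_id, value] pairs collected over the 5-second window
    [1, 23.4],
    [2, 19.7],
    // ...
];

// Build "(?, ?, NOW()),(?, ?, NOW()),..." with one group per reading.
$placeholders = implode(',', array_fill(0, count($readings), '(?, ?, NOW())'));
$stmt = $pdo->prepare(
    "INSERT INTO readings (sensor_id, value, recorded_at) VALUES $placeholders"
);
// Flatten the pairs into a single bound-parameter list.
$stmt->execute(array_merge(...$readings));
```

One multi-row INSERT per table per window keeps the query count in the tens rather than the hundreds.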

WordPress Performance - Google Compute Engine

I am using a WPMU installation and trying to import listings into my site.
I started with an ns1-standard-1 (2 CPUs and 3.75GB RAM) instance on GCE. At that time the import was going smoothly and I was able to import at a pace of 250 entries per hour using WP All Import.
However, CPU utilization went to 60-70%, which had a huge impact on live visitors to my server, so I upgraded to an ns1-standard-2 (4 CPUs and 7.5GB RAM) and then to 11GB RAM.
The performance of the import has slowly been decreasing. I raised max input vars, memory limit, and max execution time to practically infinite, but now, after just 15k entries, the speed is 80 entries per hour. I have to import 200k entries into my site.
I am also getting sudden spikes in CPU usage. I did not have such spikes in the beginning. Also, the error log doesn't mention anything with respect to the import process.
Screenshot: [CPU usage graph from the GCP web console]
Any pointers?
I'd suggest you try looking at top, oprofile, or other tools to determine what is going on with the machine that is taking the time. top can also help you determine whether RAM or CPU is the issue, and can provide much more granularity than the graph you're showing from the GCP web console. (You could also try out Stackdriver in the Basic tier to get more detail on the resource utilization, which might help you figure out the spikes).
One note: you say you're using an n1-standard-1 with 2 CPUs and 3.75GB RAM, but that is not a combination we have. An n1-standard-1 would have 1 vCPU and 3.75GB RAM, and an n1-standard-2 would have 2 vCPUs and 7.5GB RAM.
An option to see if machine size is the limitation would be to power down the VM, change the size to something big like an n1-standard-32, restart, and see if it goes faster.
Another thing to investigate would be whether you are limited by disk performance. Note that our PD (boot disk) performance is related to the overall size of the disk. So if you created a very small disk, and if it is now getting full as you do more imports, it could be that you need to increase the size of the disk to get more performance.

Sugar CRM - Performance Issues

I am trying to identify why my SugarCRM sites are loading so slowly. I am hosting 22 sites on IIS, the PHP version is 5.3.26, and my databases are on a separate SQL Server 2008 machine. The web server runs Windows Server 2008 and IIS 7, with 10GB of memory and an Intel® Xeon® E7-2870 processor.
After profiling one of my databases, I have ruled out the issue being data related, as queries were consistently running in less than 1 second.
My hosting provider has confirmed that we have a 100Mb dedicated line to our web server. After running a speed test I get around 70Mb down and 40Mb up, so I do not think this is a bandwidth issue.
I took a look at an official 'Performance Tweaks' article and made the changes to config_override.php as suggested; however, this did not make a significant difference.
http://support.sugarcrm.com/04_Find_Answers/02KB/02Administration/100Troubleshooting/Common_Performance_Tweaks/
Something I have noticed is that there are an awful lot of PHP-CGI.EXE processes. As I look at the server now, the average CPU consumption for one of these instances is 11%. I am not sure if this is something to be concerned about? The CPU usage in Windows Task Manager looks very unstable.
To see if it was a general PHP issue, I added a simple PHP script containing echo (5 % 3) . "\n"; which returned instantly.
To summarise: web pages are taking on average 5 seconds to load, and users are reporting the system as usable but sluggish.
Does anyone have any suggestions of how I might be able to speed up my application?
Thanks
What does the page load time show at the bottom of the page in SugarCRM? If it shows something like 0-2s but the page in reality takes much longer, then look at adding caching, such as APC for opcodes or Memcache for data.
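A quick way to check whether any opcode cache is loaded at all is a throwaway script like this (just a sketch; APC was the usual choice on PHP 5.3, OPcache on 5.5+):

```php
<?php
// Sketch: detect whether an opcode cache is active on this PHP install.
if (extension_loaded('apc') && ini_get('apc.enabled')) {
    echo "APC is enabled\n";
} elseif (function_exists('opcache_get_status') && opcache_get_status() !== false) {
    echo "OPcache is enabled\n";
} else {
    // Without an opcode cache, PHP recompiles every script on each request,
    // which hurts badly on a large codebase like SugarCRM.
    echo "No opcode cache detected\n";
}
```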

WordPress Website: Too Much Wait Time with Pingdom Tool

I'm working on a WordPress website:
- it's hosted on a VMware Linux virtual server with 2 cores and 4GB RAM.
- it's the only website on the server (a development server), so there is no other website traffic.
- the Apache module mod_deflate is enabled for text, html, javascript, css, and xml.
- it runs a lot of JavaScript, and the total page size is about 1.6MB.
- average CPU load is very low (0% to 5%).
- the server has 1GB of RAM free.
- my ISP verified the SAN access statistics, and latency times are very low (a few ms).
This is a load time test on the Pingdom Website Speed Test:
http://tools.pingdom.com/fpt/#!/dMWeVi/http://www.watcheswholesale.eu/
It shows 3.9 seconds of wait time.
Is there a "checklist" for understanding why the server loses these seconds before sending content to the browser?
Thanks
I profiled a WordPress installation once, being embarrassed by a similar loading time.
It turned out that the time could be cut in half with an opcode cache like APC, and that another half was being taken by parsing an enormous .po localization file. A quick patch to cache it in a PHP array finally got the loading time within a second (which is still too much, but barely bearable).
Now I am thinking that removing unused languages from that gettext file would also help.
The profiling itself was as simple as adding microtime(1)-based labels all over the code.
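That microtime-label trick can be as simple as the following (a rough reconstruction, not the original code; the checkpoint labels are placeholders):

```php
<?php
// Rough reconstruction of the microtime-label profiling described above.
// Sprinkle checkpoint() calls around suspect sections; each call prints the
// time since the previous checkpoint and since the start.
$t0 = microtime(true);
$last = $t0;

function checkpoint($label)
{
    global $t0, $last;
    $now = microtime(true);
    printf("%-20s +%.4fs (total %.4fs)\n", $label, $now - $last, $now - $t0);
    $last = $now;
}

checkpoint('start');
// ... e.g. load the gettext/.po localization data ...
checkpoint('localization loaded');
// ... e.g. render the page ...
checkpoint('page rendered');
```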
Your fundamental problem is that the site is taking too long to generate the page. I'd start by looking at how many DB calls are being made and how long they are taking; the query logs can help you with this.
You also need to turn on keep-alive so that you are reusing TCP connections, but that will only make a little bit of difference.

Understanding memory and CPU speed

Firstly, I am working on a Windows XP x64 machine with 4GB RAM and four cores at 2.29GHz.
I am indexing 220,000 lines of text that are more or less the same length. These are divided into 15 equally sized files. File 1/15 takes 1 minute to index, but the script takes much longer as it indexes more files, with file 15/15 taking 40 minutes.
My understanding is that the more I keep in memory, the faster the script runs. The dictionary is indexed in a hash, so fetch operations should be O(1). I am not sure where the script would be hanging the CPU.
I have the script here.
You can try monitoring your machine to see if you're running out of memory. If so, you may want to look for memory leaks in your code.
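Assuming the script is PHP (the file pattern and counting logic below are stand-ins, since the actual script isn't shown), logging elapsed time and memory after each file will show whether the slowdown tracks memory growth:

```php
<?php
// Stand-in sketch (the real script isn't shown): index each file into a hash
// and log elapsed time plus memory after every file. If per-file time grows
// in step with memory use, the slowdown tracks memory, not the hash lookups.
$index = [];
foreach (glob('data/file*.txt') as $i => $file) {
    $start = microtime(true);
    foreach (file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        if (!isset($index[$line])) {
            $index[$line] = 0;
        }
        $index[$line]++; // hash insert/update, O(1) on average
    }
    printf("file %2d: %6.1fs, %6.1f MB in use\n",
        $i + 1,
        microtime(true) - $start,
        memory_get_usage(true) / 1048576);
}
```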
