A server has a page that calls 10 different PHP files, which together take 10 ms (1% of CPU) to execute and use 1 MB of memory.
If the website begins to get a lot of traffic and this page request, which calls those 10 PHP files and takes 10 ms (1% of CPU), starts receiving 90 hits per second, does the CPU percentage increase, or does it stay balanced at 1%? Does the memory usage increase as well?
What would the load (CPU and memory) look like at 100 hits? 1,000 hits? 10,000 hits? 100,000 hits? Keeping with the above specifications.
Also, what if there were another 10 different pages, each calling 5 unique PHP files and 5 of the same PHP files as the page above? What happens to the load at 100, 1,000, 10,000 and 100,000 hits per second? Does it increase partially? Balance out?
There isn't much information online about how PHP behaves under heavy load, so I'm asking to get a better understanding, of course. Thanks! :o)
Your question is difficult to answer precisely, and I cannot tell you the exact ratio by which the server's resource usage will increase. But keep these two things in mind:
The more users there are, the more resources are used. It doesn't matter that you are calling the same files; what matters is that you are calling them 90 times per second.
Your system's usage will definitely increase, but one thing will soften it: caching. The operating system's file cache (and PHP's opcode cache, if enabled) will keep these frequently accessed files in memory and hence make each request a bit faster.
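To get a rough feel for the numbers, a back-of-the-envelope estimate helps. This is only a sketch: it assumes the 10 ms and 1 MB figures from the question scale linearly, that requests are CPU-bound, and that nothing else competes for the machine; real servers behave differently once a core saturates.

```php
<?php
// Back-of-the-envelope capacity estimate using the figures from the question.
$cpuTimePerRequest = 0.010; // seconds of CPU work per request
$memPerRequest     = 1.0;   // MB held while a request is running

foreach ([90, 100, 1000, 10000, 100000] as $hitsPerSecond) {
    // CPU demand: fraction of one core needed to keep up with the arrival rate.
    $coresNeeded = $hitsPerSecond * $cpuTimePerRequest;

    // Requests in flight at any instant (arrival rate x service time),
    // each holding roughly 1 MB while it runs.
    $memUse = $hitsPerSecond * $cpuTimePerRequest * $memPerRequest;

    printf("%6d hits/s -> needs ~%.1f cores, ~%.0f MB for in-flight requests\n",
           $hitsPerSecond, $coresNeeded, $memUse);
}
```

The takeaway is that CPU demand grows linearly with the hit rate: at 90 hits per second the page already needs roughly 90% of one core, and once demand exceeds the available cores, requests start queuing, response times grow, and memory use climbs with the number of requests waiting in flight.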
I am building an application in PHP that requests data from a third-party API, stores and processes that data and then submits additional API requests based on the data received from the first request.
The issue is that there are several rate limits, and where there is a large volume of data to be requested, I need to make many paginated API requests at 2-second intervals so that I can avoid getting blocked by the rate limits. Essentially, the program keeps looping, making API requests every 2 seconds, until there is no longer a next-page URL in the response header.
Depending on the amount of data, it could take several minutes, up to several hours. I can increase the max execution time in php.ini, but this is not efficient and could still result in a timeout if one day the program has too much data to work with.
I'm sure there must be a better way to manage this, possibly with serverless functions or some kind of queuing system running in the background. I have never worked with serverless functions, so it will be a learning curve, but I'm happy to learn if needed.
I would love to hear what anyone thinks the best solution is. I am building the application in PHP, but I can work with JS or Node.js if I need to.
Many thanks in advance.
You can use a queue for that. There are plenty of packages, and you can choose one depending on your needs.
You can also use asynchronous requests, for example with Guzzle or another HTTP client (which speeds up the fetching), and you can easily implement a delay/retry middleware for the rate limiter.
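As a rough illustration of the background-job idea, here is a minimal sketch assuming Guzzle for HTTP; the endpoint, the next-page header name, and the processing step are hypothetical, while the 2-second delay comes from the question. The key point is that this loop runs in a CLI worker (a queued job, a cron-started script, or a supervised daemon), not inside a web request, so the web-facing execution time limit no longer matters.

```php
<?php
// Run from the CLI (queued job, cron, or supervised worker), not from a web request.
require 'vendor/autoload.php';

use GuzzleHttp\Client;

$client = new Client(['timeout' => 30]);
$url    = 'https://api.example.com/items?page=1'; // hypothetical endpoint

while ($url !== null) {
    $response = $client->get($url);
    $items    = json_decode((string) $response->getBody(), true);

    // ... store and process this page of data here ...

    // Follow the next-page URL from the response header, if any.
    $next = $response->getHeaderLine('X-Next-Page'); // header name is an assumption
    $url  = ($next !== '') ? $next : null;

    // Respect the rate limit between paginated requests.
    sleep(2);
}
```

A queue system (e.g. Laravel queues, or a standalone worker kept alive by Supervisor or systemd) buys you essentially the same thing: the long-running loop lives in a background process, and the web request only dispatches the job.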
We're using PHP 7 and have a MySQL DB running on a web server with only 128 MB RAM.
We have a problem with processing tons of datasets.
Simple description: we have 40,000 products and we want to collect data for these products to find out whether they need to be updated or not. The query which collects the specific data from another table with 10 million rows takes 1.2 seconds, because it contains some SUM functions. We need to run the query for every product individually, because the time range that is relevant for the SUM differs per product. Because of the mass of queries, the function that should iterate over all the products times out (after 5 minutes). That's why we decided to implement a cron job which calls the function, and the function continues with the product where it stopped the last time. We call the cron job every 5 minutes.
But still, with our 40,000 products, it takes about 30 hours until all products have been processed. Per cron run, our function processes about 100 products...
How is it possible to deal with such a mass of data? Is there a way to parallelize it, e.g. with pthreads, or does somebody have another idea? Could a server upgrade be a solution?
Thanks a lot!
Nadine
Parallel processing will require resources as well, so on 128 MB it will not help.
Monitor your system to see where the bottleneck is; most probably it is the memory, since it is so low. Once you find the bottleneck resource, you will have to increase it. No amount of tuning and tinkering will solve an overloaded server.
If you can see that it is not a server resource issue (!), it could be at the query level (too many joins, missing indexes, ...). And your 5-minute timeout could be increased.
But start with the server.
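If you stay with the cron approach, one way to make each run predictable is to give it an explicit time budget and a checkpoint, rather than relying on the PHP timeout. A minimal sketch, assuming PDO and a made-up schema (the table names, column names, checkpoint file and credentials are all placeholders, not your actual setup):

```php
<?php
// Cron-driven batch: process products for at most $budget seconds,
// then persist the last processed ID so the next run can resume there.
$pdo    = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass'); // assumed credentials
$budget = 240; // seconds, safely below the 5-minute limit
$start  = time();

$checkpointFile = __DIR__ . '/last_product_id.txt'; // assumed checkpoint location
$lastId = is_file($checkpointFile) ? (int) file_get_contents($checkpointFile) : 0;

$products = $pdo->prepare(
    'SELECT id, sum_from, sum_to FROM products WHERE id > :last ORDER BY id LIMIT 500'
);
$aggregate = $pdo->prepare(
    'SELECT SUM(amount) FROM transactions
     WHERE product_id = :id AND created_at BETWEEN :from AND :to'
);

$products->execute(['last' => $lastId]);
foreach ($products as $product) {
    if (time() - $start > $budget) {
        break; // out of time, the next cron run resumes from the checkpoint
    }
    $aggregate->execute([
        'id'   => $product['id'],
        'from' => $product['sum_from'],
        'to'   => $product['sum_to'],
    ]);
    $total = (float) $aggregate->fetchColumn();
    // ... decide whether this product needs an update and store the result ...
    $lastId = (int) $product['id'];
}

file_put_contents($checkpointFile, (string) $lastId);
```

Whether this helps much depends on the point above: on 128 MB the 1.2-second SUM query itself is the likely bottleneck, and a composite index on the large table's (product_id, created_at) columns, or whatever your equivalent columns are, is usually the first thing to check.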
I'm currently building an API in Laravel (PHP) for retrieving prices from different hosts. There are currently 756 different 'coins' with different hosts to retrieve the prices from.
For example: Coin X
Host 1
Host 2
Host 3
-- up to 30 hosts
Coin Y
Host 1
Host 4
Host 5
-- up to 30 hosts
etc.
The problem here is, that ideally each coin should be updated every 10 seconds. This means that each coin needs to call all of its hosts, calculate the average price, save the price in the DB and finally save a JSON file with the total history of the coin. (Perhaps it would be better to also save the current price as JSON to save some time)
I've tried putting all of this in a class for each host, but the execution time is way too long (around 5 minutes using cURL).
I'm thinking of creating a task (cron job) for each coin. This way the 'updating' of the coins can run simultaneously (multiple coins at once). But I'm not quite sure this would be the best way.
What approach would you guys recommend? All tips are welcome.
Your overall update time is going to be limited by the number of coins multiplied by the individual host update speed. You might want to time how long one (or an average) request takes, with a script, to see just how long the whole thing will take. If doing them one after the other will likely take too long, you will have to set it up to make parallel requests (several at the same time).
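Since the question already uses cURL, here is a minimal sketch of fetching the prices for one coin from several hosts in parallel with curl_multi. The URLs and the 'price' field in the response are placeholders, and error handling is reduced to the essentials:

```php
<?php
// Fetch prices for one coin from several hosts in parallel using curl_multi.
$urls = [
    'https://host1.example.com/price?coin=X', // placeholder URLs
    'https://host2.example.com/price?coin=X',
    'https://host3.example.com/price?coin=X',
];

$mh      = curl_multi_init();
$handles = [];

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 5); // don't let one slow host block the whole cycle
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Run all transfers at once instead of one after the other.
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh); // wait for activity instead of busy-looping
    }
} while ($running && $status === CURLM_OK);

$prices = [];
foreach ($handles as $ch) {
    if (curl_errno($ch) === 0) {
        $data = json_decode(curl_multi_getcontent($ch), true);
        if (isset($data['price'])) { // response field name is an assumption
            $prices[] = (float) $data['price'];
        }
    }
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);

$average = $prices ? array_sum($prices) / count($prices) : null;
```

With parallel requests, the 30 hosts for a coin take roughly as long as the slowest single host rather than the sum of all of them, and the same idea extends to handling several coins per cron invocation.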
I have a PHP script (HTTP API Web service) which does INSERT and SELECT data from a MySQL database. It has 3 SELECT queries and 2 INSERT queries.
This PHP script is called 10,000 times per second by other servers via HTTP GET to a URL like http://myserver.com/ws/script.php?colum1=XXX&column2=XXX
However, only 200 records are stored per second.
I use an Intel(R) Core(TM) i7-3770 quad-core CPU @ 3.40 GHz, 32 GB RAM, CentOS with cPanel, and a 2 TB SATA HDD.
How can I increase the amount of queries per second?
Here are some options to increase database performance:
enable MySQL query caching if not already enabled (http://dev.mysql.com/doc/refman/5.7/en/query-cache-configuration.html)
add indexes on search columns
optimize queries (e.g. by avoiding deep sub-selects or complex query conditions, or by checking the framework/ORM for unnecessary join logic)
change the storage engine (e.g. InnoDB to MyISAM)
use a more scalable DBMS (e.g. MariaDB instead of MySQL)
use one or more mirrors on additional hardware (slave databases for reading only) behind a load balancer
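To make the 'add indexes on search columns' and 'optimize queries' items concrete, here is a minimal sketch. The table name and the use of the two query-string parameters as search columns are assumptions based on the URL in the question, not the real schema:

```php
<?php
// Assumed schema: a `records` table filtered by the two query-string parameters.
$pdo = new PDO('mysql:host=localhost;dbname=ws', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// One-time setup (not per request): an index on the columns the SELECTs
// filter by avoids a full table scan on every call.
// $pdo->exec('CREATE INDEX idx_records_search ON records (colum1, column2)');

// Prepared statements avoid rebuilding the SQL string for each of the
// 3 SELECTs and 2 INSERTs and protect against injection from the GET parameters.
$select = $pdo->prepare('SELECT id FROM records WHERE colum1 = ? AND column2 = ?');
$insert = $pdo->prepare('INSERT INTO records (colum1, column2) VALUES (?, ?)');

$params = [$_GET['colum1'] ?? '', $_GET['column2'] ?? ''];

$select->execute($params);
if ($select->fetch(PDO::FETCH_ASSOC) === false) {
    $insert->execute($params);
}
```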
I have a long-polling script in PHP. Is it recommended to use sleep(x seconds)? If I don't use it, will the machine slow down (lagging, Apache stopping, etc.)? Does it make a difference?
sleep - of any duration, even "0 seconds" - is a quick way to get the Operating System's scheduler to 'pause' the current task and allow another process to continue work.
This context switch prevents visible 'lagging' because the other processes have a chance to do what they need to do. Even if there is no other process that needs to do work, a sleep still causes the current process's execution to halt until it is rescheduled. The rescheduling alone goes a long way toward keeping the CPU from turning into a toaster, because the effective time the process is given to work is greatly reduced.
Without the sleep (or another blocking IO task) it becomes a 'hot busy loop'; this loop is executed as fast as it can be and, even though the process will eventually be preempted without a sleep, the busy loop will consume significantly more CPU resources before it is rescheduled. (This also implies that the same amount of work will take longer to complete when sleeping often.)
Thus: sleep can be advantageous for selectively yielding work in a CPU-bound application, but at the same time it can reduce the available CPU/processing throughput if called too often or for too long. Sleeping in a loop has much less impact on an IO-bound application; there its primary purpose is to impose a delay before continuing or repeating a certain action.
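For completeness, a minimal long-polling endpoint in the spirit of this answer; checkForNewData() and the timing values are hypothetical stand-ins, and the usleep is what keeps the wait from becoming the hot busy loop described above:

```php
<?php
// Long-polling endpoint: wait up to 30 seconds for new data, checking 5 times per second.
$deadline = time() + 30;

// Hypothetical check; replace with a real query against your data source,
// e.g. selecting rows newer than the client's last-seen ID.
function checkForNewData(): ?array
{
    return null;
}

while (time() < $deadline) {
    $data = checkForNewData();
    if ($data !== null) {
        header('Content-Type: application/json');
        echo json_encode($data);
        exit;
    }
    usleep(200000); // 200 ms: yields the CPU instead of spinning in a hot loop
}

// Nothing arrived within the window; the client simply reconnects and polls again.
http_response_code(204);
```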