Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 6 years ago.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
I have a PHP script (an HTTP API web service) which inserts into and selects from a MySQL database. Each request runs 3 SELECT queries and 2 INSERT queries.
This PHP script is called 10,000 times per second by other servers, via HTTP GET to a URL like http://myserver.com/ws/script.php?colum1=XXX&column2=XXX
However, only 200 records are stored per second.
I use an Intel Core i7-3770 quad-core CPU @ 3.40 GHz, 32 GB RAM, CentOS with cPanel, and a 2 TB SATA HDD.
How can I increase the amount of queries per second?
Here are some options to increase database performance:
enable MySQL query caching if not already enabled (http://dev.mysql.com/doc/refman/5.7/en/query-cache-configuration.html); note that the query cache is deprecated in MySQL 5.7 and removed in 8.0
add indexes on the columns you search on
optimize queries (e.g. by avoiding deep sub-selects or complex query conditions, or by checking your framework/ORM for unnecessary join logic)
evaluate the storage engine (though for a write-heavy workload like this, InnoDB's row-level locking usually outperforms MyISAM's table locks)
use a more scalable DBMS (e.g. MariaDB instead of MySQL)
use one or more mirrors on additional hardware (read-only slave databases) behind a load balancer
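On the application side, one of the biggest wins at this request rate is usually batching: buffering incoming rows and flushing them as a single multi-row INSERT, so MySQL pays one network round-trip and one commit per batch instead of per row. A minimal sketch; the table and column names (`requests`, `column1`, `column2`) are placeholders for your own schema:

```php
<?php
// Sketch: collapse many single-row INSERTs into one multi-row INSERT.
// addslashes() only keeps this example self-contained; production code
// should escape with mysqli::real_escape_string or, better, use a
// prepared statement.
function buildBatchInsert(array $rows): string {
    $values = array_map(function (array $r): string {
        return sprintf("('%s','%s')", addslashes($r[0]), addslashes($r[1]));
    }, $rows);
    return 'INSERT INTO requests (column1, column2) VALUES ' . implode(',', $values);
}

echo buildBatchInsert([['a', 'b'], ['c', 'd']]), "\n";
// INSERT INTO requests (column1, column2) VALUES ('a','b'),('c','d')
```

A batch of 50 rows then costs roughly one round-trip and one fsync instead of 50, which is often the difference between 200 and several thousand inserted rows per second on a single SATA disk.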
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I have a server hosting a few websites, arranged roughly as in the following picture.
I decided to add another panel as a new website. Each website had its own structure, but I removed the unnecessary parts, so I'm now using one single database for all of the websites. Recently I had an issue with high CPU usage from MySQL. I'm not sure whether it is caused by using a single database or not.
In addition: is there a way to run a cron job more often than once a minute? I tried sleep() but I guess it's not a good idea.
Sharing one database amongst multiple applications has some serious disadvantages:
The more applications use the same database, the more likely it is that you hit performance bottlenecks and that you can't easily scale the load as desired.
Maintenance and development costs can increase: development is harder if an application has to use database structures that aren't suited to the task at hand but must be used because they are already present. It's also likely that adjustments for one application will have side effects on other applications ("why is there such an unnecessary trigger??!" / "We don't need that data anymore!"). This is already hard with one database for a single application, when the developers don't or can't know all the use cases.
Administration becomes harder: Which object belongs to which application? Chaos rising. Where do I have to look for my data? Which user is allowed to interact with which objects? What can I grant whom?
Coming back to your issue of high resource usage: it is most likely caused by multiple applications sharing the same database, which drives up CPU utilization. I strongly suggest giving every application its own database for better performance and scaling capabilities.
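On the sub-minute cron question: cron's finest granularity is one minute, but the common workaround is to schedule several identical crontab entries per minute, each offset with sleep, rather than looping with sleep() inside PHP. A sketch (the script path is a placeholder):

```
# Run a job every 15 seconds despite cron's one-minute granularity:
# four entries fire each minute, offset by sleep.
* * * * * /usr/bin/php /path/to/job.php
* * * * * sleep 15; /usr/bin/php /path/to/job.php
* * * * * sleep 30; /usr/bin/php /path/to/job.php
* * * * * sleep 45; /usr/bin/php /path/to/job.php
```

This keeps each run short-lived, so a crashed or slow run doesn't take the whole schedule down the way a long-running sleep() loop would.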
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I just wondered: what is the fastest way to connect with MySQLi? I have yet to find a Stack Overflow post on this, or on whether there even is a fastest way or every way is equally fast. I really want to milk all the speed I can out of my application.
You don't need a HandlerSocket.
HandlerSocket is a MySQL plugin that implements a NoSQL protocol for MySQL.
It allows applications to communicate more directly with MySQL storage engines, without the overhead associated with using SQL.
From the docs:
Once HandlerSocket has been downloaded and installed on your system, there are two steps required to enable it.
First, add the following lines to the [mysqld] section of your my.cnf file:
loose_handlersocket_port = 9998
# the port number to bind to for read requests
loose_handlersocket_port_wr = 9999
# the port number to bind to for write requests
loose_handlersocket_threads = 16
# the number of worker threads for read requests
loose_handlersocket_threads_wr = 1
# the number of worker threads for write requests
open_files_limit = 65535
# to allow handlersocket to accept many concurrent
# connections, make open_files_limit as large as
# possible.
Second, log in to mysql as root, and execute the following query:
mysql> install plugin handlersocket soname 'handlersocket.so';
I agree with @Your Common Sense that a HandlerSocket is not even needed, unless you're a major corporation and every second counts.
The fastest normal way to connect:
$db = mysqli_connect("localhost", "my_user", "my_password", "my_db");
if (!$db) { die('Connect failed: ' . mysqli_connect_error()); }
$db->query("...");
How much speed do you actually need? If you need corporate-scale speed, I can help you with HandlerSocket. My guess is that you'll be just fine with a standard MySQLi connection.
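One concrete speed-up worth mentioning: mysqli supports persistent connections (since PHP 5.3) by prefixing the hostname with p:. The PHP worker process then reuses an already-open connection on subsequent requests, skipping the TCP and authentication handshake. A sketch with placeholder credentials:

```php
<?php
// "p:" asks mysqli to reuse a persistent connection held by this
// PHP worker process instead of opening a fresh one per request.
// Host, user, password, and database name are placeholders.
$db = mysqli_connect('p:localhost', 'my_user', 'my_password', 'my_db');
if (mysqli_connect_errno()) {
    die('Connect failed: ' . mysqli_connect_error());
}
$result = $db->query('...');
```

This only helps under mod_php or FastCGI, where the process survives between requests, and you should be careful that no per-request state (open transactions, temporary tables, session variables) leaks from one request to the next.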
Well, if you want a really fast connection to a MySQL database, you may consider a HandlerSocket solution.
Though I doubt you really need anything faster than the regular approach.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am planning to create a PHP/MySQL inventory system for a large firm (at least 1,000 branches).
There will be a centralized server and database kept in one place, where all the branches can insert and retrieve data. There will be at least 2,000 sales bills and 100 purchase bills (at least 1 GB of data from a branch).
My doubt: is it technically feasible for me to use PHP and MySQL (on Apache) for this project, given how vast the data will be? Do I need to change the front end to JSP or the back end to another database?
I don't know much about PHP and MySQL. If anyone has already been through this scenario, your help would be appreciated.
I suspect this question will not stay open for long, as it is way too generic, but for what it's worth: yes, it is feasible. I did a similar project before.
You would have to be careful with your data schema structure, and would need to tune the MySQL server quite a bit, but this is true for any database.
You also might want to employ local servers and replication to the central server.
Your reporting server should be separate, since its workload should not affect main data performance.
These are some thoughts that come to mind.
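The replication idea above can be sketched in my.cnf terms. Server IDs, log names, and the database name (inventory) are placeholders for your own setup:

```
# Central primary (takes the branches' writes):
[mysqld]
server-id    = 1
log-bin      = mysql-bin
binlog-do-db = inventory

# Separate reporting replica, on its own machine:
# [mysqld]
# server-id = 2
# relay-log = mysql-relay-bin
# read-only = 1
```

The replica is then pointed at the primary with CHANGE MASTER TO and serves the reporting queries, so heavy reports never contend with the branches' inserts on the primary.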
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
A server has a page that calls 10 different PHP files, which in total take 10 ms (1% of CPU) to execute and 1 MB of memory.
If the website begins to get lots of traffic and this page (10 PHP files, 10 ms, 1% of CPU per request) starts receiving 90 hits per second, does the CPU percentage increase, or does it stay balanced at 1%? Does memory usage increase as well?
What would the load (CPU and memory) look like at 100 hits? 1,000 hits? 10,000 hits? and 100,000 hits?
Keeping with the above specifications.
Also, suppose there were another 10 different pages, each calling 5 unique PHP files plus 5 of the same PHP files as above. What happens to the load at 100, 1,000, 10,000 and 100,000 hits per second? Does it increase proportionally? Does it level off?
There isn't much information on heavy loading behavior for PHP online, so I'm asking to get a better understanding, of course. Thanks! :o)
Your question is difficult to answer precisely, and I cannot tell you the exact ratio by which the server's resource usage will increase. But keep these two things in mind:
The more users, the more resources are used. It doesn't matter that you are calling the same files; what matters is that you are calling them 90 times per second.
Your system's usage would definitely increase, but one thing softens it a little: caching. The CPU and operating system will keep these frequently accessed files in cache, which makes serving them a bit faster.
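Using the question's own figures (10 ms of CPU and roughly 1 MB of memory per request), a back-of-the-envelope model shows how load scales with hit rate. This assumes perfectly linear scaling and uniform arrivals, which real servers only approximate:

```php
<?php
// Back-of-the-envelope load model from the question's figures:
// each request costs 10 ms of CPU and holds ~1 MB while it runs.

function cpuCoresBusy(float $hitsPerSecond, float $cpuSecondsPerHit): float {
    return $hitsPerSecond * $cpuSecondsPerHit; // cores kept busy
}

function concurrentMemoryMb(float $hitsPerSecond, float $secondsPerHit, float $mbPerHit): float {
    // Little's law: average concurrent requests = arrival rate * residence time
    return $hitsPerSecond * $secondsPerHit * $mbPerHit;
}

printf("90 hits/s: %.0f%% of one core, %.1f MB resident\n",
    cpuCoresBusy(90, 0.010) * 100, concurrentMemoryMb(90, 0.010, 1.0));
printf("1000 hits/s: %.1f cores, %.1f MB resident\n",
    cpuCoresBusy(1000, 0.010), concurrentMemoryMb(1000, 0.010, 1.0));
// Once the cores saturate (~100 hits/s per core here), response times
// grow and this linear model stops holding.
```

So CPU does not stay at 1%: around 90-100 hits/s this page saturates roughly one core, 1,000 hits/s needs about 10 cores' worth of CPU, and 10,000+ hits/s means scaling across machines. Memory, by contrast, stays comparatively small, since only the requests in flight at any instant hold their 1 MB.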
Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I am creating a social networking site in a PHP framework, and I am thinking about these points:
Server technology
Database and Indexing
Caching (memory and database and file)
Load balancing
Can somebody help me with these points? What other points should I consider?
Have a prototype running
Write good maintainable code
Use common sense (like using primary keys and indexes in your DB of choice); don't join against views, etc.
Start with few users, see where the bottle necks are, and then decide/ask again.
Or, as the saying goes, "premature optimization is the root of all evil" (though not everyone agrees with that).
In addition to what has already been written: if your website uses a database, then for a high-volume site it is highly recommended to reuse database connections (in PHP, via persistent connections) rather than opening and closing a connection in each script.