Fastest way to connect with MySQLi? [closed] - php

I was just wondering what the fastest way to connect with MySQLi is. I have yet to find a Stack Overflow post on this, or on whether there even is a fastest way. I really want to milk all the speed I can out of my application.

You don't need a HandlerSocket.
HandlerSocket is a MySQL plugin that implements a NoSQL protocol for MySQL.
It allows applications to communicate more directly with MySQL storage engines, without the overhead associated with using SQL.
From the docs:
Once HandlerSocket has been downloaded and installed on your system, there are two steps required to enable it.
First, add the following lines to the [mysqld] section of your my.cnf file:
loose_handlersocket_port = 9998
# the port number to bind to for read requests
loose_handlersocket_port_wr = 9999
# the port number to bind to for write requests
loose_handlersocket_threads = 16
# the number of worker threads for read requests
loose_handlersocket_threads_wr = 1
# the number of worker threads for write requests
open_files_limit = 65535
# to allow handlersocket to accept many concurrent
# connections, make open_files_limit as large as
# possible.
Second, log in to mysql as root, and execute the following query:
mysql> install plugin handlersocket soname 'handlersocket.so';
I agree with @Your Common Sense that a HandlerSocket is not even needed, unless you're a major corporation and every second counts.
The fastest, normal way to connect:
$db = mysqli_connect("localhost", "my_user", "my_password", "my_db");
mysqli_query($db, "...");
How much speed do you actually need? If you need corporate-scale speed, I can help you with HandlerSocket. My guess is that you'll be just fine with a standard MySQLi connection.
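If connection setup itself is what you want to shave, MySQLi also supports persistent connections: prefixing the hostname with p: reuses an already-open connection instead of performing a new TCP and authentication handshake on every request. A minimal sketch (host and credentials are placeholders):

// The "p:" host prefix asks mysqli for a persistent connection, which
// reuses an already-open connection instead of doing the TCP and
// authentication handshake again on every request.
$db = mysqli_connect("p:localhost", "my_user", "my_password", "my_db");
if (mysqli_connect_error()) {
    die("Connection failed: " . mysqli_connect_error());
}
mysqli_query($db, "...");

Persistent connections can carry session state (locks, temporary tables) across requests, so measure before assuming they're a free win.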

Well, if you want a really fast connection to a MySQL database, you may consider a HandlerSocket solution.
Though I doubt you really need anything faster than the regular approach.

Related

one database or multiple [closed]

I have a server with a few websites on it, set up something like in the following picture.
I decided to add another panel as a new website. Each website had its own structure, but I removed the unnecessary parts, so I'm now using one single database for all of the websites. Recently I've had an issue with high CPU usage from MySQL, and I'm not sure whether it's because of using one single database or not.
In addition: is there a way to fetch data with a cron job more often than once a minute? I tried sleep(), but I guess it's not a good idea.
Sharing one database amongst multiple applications has some serious disadvantages:
The more applications use the same database, the more likely it is that you hit performance bottlenecks and that you can't easily scale the load as desired.
Maintenance and development costs can increase: Development is harder if an application needs to use database structures that aren't suited to the task at hand but have to be used because they are already present. It's also likely that adjustments to one application will have side effects on other applications ("why is there such an unnecessary trigger??!" / "We don't need that data anymore!"). It's already hard with one database for a single application, when the developers don't/can't know all the use cases.
Administration becomes harder: Which object belongs to which application? Chaos rising. Where do I have to look for my data? Which user is allowed to interact with which objects? What can I grant whom?
Coming back to your issue of high resource usage: this is most likely caused by multiple applications utilizing the same database, which drives up CPU utilization. I strongly suggest giving every application its own database, for better performance and scaling.
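On the side question about running something more often than cron's one-minute granularity: one common pattern (a generic workaround, not specific to your setup) is to let cron start the script once per minute and loop inside it with sleep(). A rough sketch, where fetch_data() is a hypothetical placeholder for the actual work:

// Started by cron once per minute; does the work every 10 seconds
// and stops before the next cron invocation begins.
$start = time();
while (time() - $start < 55) {
    fetch_data(); // placeholder for whatever the job actually does
    sleep(10);
}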

Increase MySQL queries per second [closed]

I have a PHP script (an HTTP API web service) which INSERTs data into and SELECTs data from a MySQL database. It runs 3 SELECT queries and 2 INSERT queries.
This PHP script is called 10,000 times per second by other servers via HTTP GET to a URL like http://myserver.com/ws/script.php?colum1=XXX&column2=XXX
However, only 200 records are stored per second.
The server is an Intel(R) Core(TM) i7-3770 quad-core CPU @ 3.40GHz, 32 GB RAM, CentOS with cPanel, 2 TB SATA HDD.
How can I increase the number of queries per second?
Here are some options to increase database performance:
enable MySQL query caching if not already enabled (http://dev.mysql.com/doc/refman/5.7/en/query-cache-configuration.html)
add indexes on search columns
optimize queries, e.g. by avoiding deep sub-selects or complex query conditions, or by checking your framework/ORM for unnecessary join logic (see the sketch after this list for one common pattern on the INSERT side)
change the storage engine (e.g. InnoDB to MyISAM)
use a more scalable DBMS (e.g. MariaDB instead of MySQL)
use one or more mirrors on additional hardware (slave databases for reading only) and a load-balancer
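On the INSERT side specifically, a commonly cited pattern is to reuse one prepared statement inside an explicit transaction, so the server parses once and commits once instead of once per row. A rough sketch (the table, columns and $rows array are made up for illustration):

// One prepare and one commit for many rows, instead of one of each per row.
$db = new mysqli("localhost", "my_user", "my_password", "my_db");
$stmt = $db->prepare("INSERT INTO log (column1, column2) VALUES (?, ?)");
$db->begin_transaction();
foreach ($rows as $row) {
    $stmt->bind_param("ss", $row['column1'], $row['column2']);
    $stmt->execute();
}
$db->commit();

Whether this helps depends on where the bottleneck actually is: the database, or the 10,000 HTTP requests per second hitting PHP itself.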

Should a PDO script written for MySQL work with Oracle? [closed]

I understand that in general PDO scripts are cross-compatible, i.e. generally just changing the connection string should work.
In the past I've spent hours searching online after changing a PDO script's connection string from MySQL to SQLite, because this isn't quite the case: some things don't work the same (I remember an issue with row counting or something).
So should changing from MySQL to Oracle be generally simple, or are there things to watch out for, as in the SQLite case?
So should changing from MySQL to Oracle be generally simple, or are there things to watch out for, as in the SQLite case?
There are things to watch out for.
More seriously, besides basic SQL queries, each RDBMS has its own set of specific features that have to be taken into account. Just to give one example: if you want to limit the result set to one row, MySQL provides the LIMIT clause, but in Oracle up to 11g you need a sub-query for that purpose.
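To make that concrete, here is roughly what a "first row only" query looks like under each vendor; PDO passes the SQL through as-is and does not translate between dialects (the table and column names here are invented for the example):

$pdo = new PDO($dsn, $user, $pass); // the DSN also differs: mysql:... vs oci:...

// MySQL: LIMIT clause
$stmt = $pdo->query("SELECT name FROM users ORDER BY name LIMIT 1");

// Oracle up to 11g: sub-query filtered on ROWNUM
$stmt = $pdo->query(
    "SELECT name FROM (SELECT name FROM users ORDER BY name) WHERE ROWNUM <= 1"
);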
If you really need cross-vendor support, you should probably take a look at a library providing a database abstraction layer, whose job is to let you write database-agnostic code. PDO isn't such a library, but Doctrine DBAL, Zend_Db and many others are.
It is now considered off-topic to request tool suggestions here, but take a look at this old question if you need a few pointers: Best PHP DAL (data abstraction layer) so far

PHP variable for all users on server [closed]

Is it possible to define a variable in PHP and access it from all users connected to the server?
I need a variable or object to store information in the server's RAM, without using a database or the server's file system.
Save data to the variable from one computer, and read it back from another connected computer.
What is the best practice? Is it possible at all?
Roughly speaking: yes, it is possible.
To do that directly you would need raw access to RAM, which I haven't seen done in PHP; I'm not sure whether it's possible or not, you can research that yourself.
What you can do, however, is take advantage of the fact that PHP itself runs in memory: create a PHP script that runs forever and acts as a server, using its ability to read and write variables in memory. That part is amazingly simple, since PHP handles memory for you automatically and you don't have to bother with addresses and such (it's just an ordinary variable declaration). To talk to this running script you will need to look into how sockets work and how to establish a server-client connection; that is very well explained in this article.
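For the curious, a minimal sketch of that "variable server" idea, with a made-up one-line protocol ("set key value" / "get key"); one long-running PHP process keeps $store in memory and answers over a TCP socket:

// Holds $store for as long as the process lives; clients connect via TCP.
$server = stream_socket_server("tcp://127.0.0.1:9500", $errno, $errstr);
$store = [];
while ($conn = stream_socket_accept($server, -1)) { // block until a client connects
    $parts = explode(" ", trim(fgets($conn)), 3);
    if ($parts[0] === "set") {
        $store[$parts[1]] = $parts[2];
        fwrite($conn, "ok\n");
    } elseif ($parts[0] === "get") {
        fwrite($conn, ($store[$parts[1]] ?? "") . "\n");
    }
    fclose($conn);
}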
However, I do not mean to be rude, but from the way you phrase your question I assume this may be too much for you. So what you can do instead is use Memcached or any other in-memory caching mechanism that has already been built by people better at coding than me and you. There is plenty of information out there; just search for in-memory caching mechanisms.
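For completeness, here is what the Memcached route looks like from PHP, assuming the memcached extension is installed and a memcached daemon is listening on its default port:

$cache = new Memcached();
$cache->addServer("localhost", 11211); // default memcached port

// A request from one user stores the value...
$cache->set("shared_value", 42);

// ...and a request from any other user (any PHP process) reads it back.
$value = $cache->get("shared_value");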
Good luck!

Fastest geolocation (IP to town)? [closed]

I am writing a small geolocation service: when a user comes to my site, I should set their town from their IP address. So far I have found three ways to solve this problem:
From PHP, connect to a MySQL DB and select the town from it.
From PHP, call a CGI script (Perl, C?) that selects the town from a file of towns and IP addresses.
Use a service like http://ipinfodb.com/ip_location_api.php and get the town from it.
But which way would be fastest? Minimal time, etc.?
Thanks!
Option 3, primarily because of just how much data you'd have to manually compile together to do either 1 or 2.
There is no easy answer to it because a lot depends on unknown factors such as:
Speed of your MySQL DB
Speed of your PHP implementation and the size of the file
Speed of the location_api service
In other words, there are only two ways to find out the answer:
build them all and test
gather all parameters (speeds, bandwidth, concurrent users of all systems) and calculate/guesstimate.
I've used the MaxMind database for country-level lookups from PHP (there is example code for other languages too). The downloadable database is in a binary format optimised for read speed. Although I've not compared it to importing the data into MySQL and searching with SQL, I have no doubt about MaxMind's claim that it is faster to use their API and the original data rather than go via another means, like SQL.
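As a sketch of what the MaxMind route can look like from PHP, assuming the PECL geoip extension and a city-level database are installed:

// geoip_record_by_name() reads MaxMind's binary database directly,
// with no SQL round-trip; it returns false for unknown addresses.
$record = geoip_record_by_name($_SERVER['REMOTE_ADDR']);
if ($record !== false) {
    $town = $record['city'];
}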
