I created a temporary table in one PHP file and want to access it in another PHP file; the scripts run sequentially. I used mysqli and am prepending p: to the hostname.
The problem is that in my second PHP file I can't access the temporary table. So I wanted to know whether this is possible at all, and if yes, how? I am using WAMP server.
Not possible, directly. Temporary tables are destroyed when the connection used to create them is closed. When your "create" script shuts down, its DB connection is closed and MySQL cleans up, which includes destroying that temp table.
That means when your "use" script fires up, it gets a new connection, without any of the state the first script set up.
There are persistent connections available in PHP, but those connections live in a pool, and there is no control over WHICH connection any particular script gets from that pool. You may get lucky and receive the same connection for two different scripts, but that is purely by chance.
You'd need some OTHER third script that runs continuously to hold the MySQL connection open, leaving the temp table in place, and your other two scripts would have to communicate with that third one.
From http://php.net/manual/en/mysqli.persistconns.php
The persistent connection of the mysqli extension however provides built-in cleanup handling code. The cleanup carried out by mysqli includes:
(worth reading the other things too but the important bit is)
Close and drop temporary tables
In short, a temporary table is just that: temporary. It's not meant to be used for anything other than temporarily storing some data for one specific operation. If you want something more permanent, consider a regular table with the MEMORY storage engine.
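A minimal sketch of that MEMORY-table alternative, assuming made-up table and column names and placeholder credentials:

// first.php - creates and fills the shared table.
$db = new mysqli('localhost', 'user', 'pass', 'test');

// Unlike CREATE TEMPORARY TABLE, this table outlives the connection, so a
// second script can read it. Its rows live in RAM and are lost if the
// MySQL server restarts.
$db->query("CREATE TABLE IF NOT EXISTS scratch_results (
    id INT NOT NULL,
    value VARCHAR(255)
) ENGINE=MEMORY");
$db->query("INSERT INTO scratch_results (id, value) VALUES (1, 'hello')");
$db->close();

// second.php can then open its own connection and simply
// SELECT * FROM scratch_results.

Since such a table is shared by all connections, the second script would also be responsible for cleaning it up (DELETE or DROP) once it is done with the data.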
Related
This question has a wide scope, covering web servers, database servers, the PHP application, etc., and hence I doubted whether it belongs on Stack Overflow; however, since the answer will help us decide how to write the application code, I have decided to ask it here.
I am confused about how database sessions and web servers work together. If I am right, when a connection is made for a client, ONLY one session will be created for that connection, and it will last until the connection is either disconnected or reconnected after long inactivity.
Now consider a web server, Apache 2.4 in particular, running a PHP 7.2 application (in a Virtual Host) with a database backed by MariaDB 10.3.10 (on Fedora 28, if that matters at all). I assume the following scenario (please correct me if I am wrong):
For each web application (right now we use Laravel) only one database connection will be made, as soon as the first query hits the URLs it serves.
Subsequently, there will be only ONE database session for that connection. When the query has been served, the connection is kept alive to be reused by the other queries the application receives. That means that if the application receives web requests 24 x 7, the connection will most likely also be kept alive and will only be disconnected when we restart either mysqld or httpd, which might not happen for months.
Since all the users of the application, let us say 20 users at a time, use the same Apache server and the same Laravel application files (which I assume I can call an application instance), all 20 users will be served through the same database connection and database session.
If the assumptions above are right, then the concept of database locking seems very confusing. Say we issue an exclusive lock, e.g. lock tables t1 write; it will block the reads and writes of other sessions, to avoid dirty reads and writes between concurrent sessions. However, since all 20 users use the same session and connection concurrently, we will not get the required concurrency safety out of the database locking mechanism.
Questions:
How does database locking, an explicit exclusive lock in particular, work in terms of web applications?
Will each web request received by the Laravel application create a new connection and session, or is ONLY one connection and session reused?
Will each database connection have one and only ONE session at a time?
Does the command show status where variable_name = 'Threads_connected' show the current active sessions or the current connections? If it shows the current active connections, how can we get the current active database sessions?
Apache has nothing to do with sessions in this scenario (mostly). Database connections and sessions are handled by PHP itself.
Unless you have connection pooling enabled, database sessions will not be reused; each request will open its own connection and close it at the end.
With connection pooling enabled, the thread serving the request will ask the process manager (be it FPM or mod_php) for a connection from the pool, and it will return an available connection, but there will still be at least as many sessions as concurrent requests (unless you hit one of the max_ limits). The general reference goes into more detail, but as a highlight:
Persistent connections do not give you an ability to open 'user sessions' on the same link, they do not give you an ability to build up a transaction efficiently, and they don't do a whole lot of other things. In fact, to be extremely clear about the subject, persistent connections don't give you any functionality that wasn't possible with their non-persistent brothers.
Even with a connection pool available, the manager must run some cleanup operations before returning the connection to the client. One of those operations is table unlocking.
You can refer to the connections reference and persistent connections reference of the mysqli extension for more information.
However, the mode of operation you are describing, where multiple client sessions share one connection, is possible (if experimental) and has more drawbacks. It's known as session multiplexing.
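For the last question, a quick sketch (credentials are placeholders): Threads_connected counts the currently open connections, and since each connection corresponds to one server session, the process list shows one row per session.

$db = new mysqli('localhost', 'user', 'pass');

// Number of currently open connections (one server session per connection):
$res = $db->query("SHOW STATUS WHERE Variable_name = 'Threads_connected'");
print_r($res->fetch_assoc());

// Per-session detail (user, host, current command). Without the PROCESS
// privilege you only see your own threads.
$res = $db->query("SHOW PROCESSLIST");
while ($row = $res->fetch_assoc()) {
    print_r($row);
}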
I know that PHP automatically closes open MySQL connections at the end of the script, and that the only viable way to open a persistent connection is to use the appropriate PHP function; many questions about this have been asked and answered.
What I would like to know is the benefit, or the drawback, of keeping a temporary connection instead of a persistent one.
EDIT :
A persistent connection for each PHP user session.
For instance, statements such as the following:
session_start();
$connection = new mysqli($host, $user, $pass, $db);
$_SESSION['connection'] = $connection;
might set up a reference to a mysqli object that could be used for multiple queries as the same user navigates the website within the same session.
If the connection is expected to be used again soon after it is opened, wouldn't it be the right choice to just leave it open for further queries? Or would this approach cause problems (and possible security risks) when multiple users are requesting pages over HTTP from a website that keeps MySQL connections alive?
I would like to know more. Thank you.
There's overhead to connecting to MySQL or any database, although it isn't typically large. This overhead can be greater when the MySQL service is running on a different server, or depending on the authentication method and init commands needed.
Moreover, MySQL connections may have associated caches, so reusing a connection may allow those caches to be reused as well.
But saving a resource in the session doesn't work. Session data is serialized and stored in, for example, a file between requests. That's why the persistent-connection mechanisms have to be used.
The reason is that the connection is ultimately a resource, or a socket connection inside an internal class, and this can't be "saved" without special handling. Try it, and you'll see that you get an error (with both PDO and mysqli):
mysqli::query(): Couldn't fetch mysqli
I don't think there's any way to get session-specific connection reuse without writing an extension to implement it. It's theoretically possible, though; you could implement a method that pulls a connection from the pool by session id.
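As a small illustration of why the session approach fails, and of the supported alternative (hostnames and credentials are placeholders; the exact failure depends on the PHP version):

session_start();

// Reviving a mysqli object from $_SESSION does not work: depending on the
// PHP version, either the session write refuses to serialize the object, or
// the revived object errors out ("Couldn't fetch mysqli") when used.
if (isset($_SESSION['connection'])) {
    $db = $_SESSION['connection'];
    $db->query('SELECT 1');   // fails on the second request
}

// The supported way to reuse connections is mysqli's persistent mode,
// enabled by prefixing the hostname with "p:". Reuse happens per pooled
// connection, not per PHP user session.
$db = new mysqli('p:localhost', 'user', 'pass', 'mydb');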
There are a lot of potential drawbacks and risks to persistent db connections, though:
If your application creates TEMPORARY TABLEs, those tables will still exist on the next run, since it's the same MySQL session.
If you set any variables (SET SESSION, etc.), they will be retained.
Depending on how transactions are handled, it's theoretically possible a persistent connection may have an in-progress transaction.
If the application issues any LOCK commands or uses user-level locks, those locks could be held between requests and unknowingly inherited by new requests.
On a multi-tenant (shared) webserver, the risk of another process somehow obtaining database access it shouldn't have is higher.
This may leave large numbers of connections open for long periods of time, increasing resource usage on the database server.
You still have to expect that the connection might be lost between requests.
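One way to defensively reset state at the start of a request that reuses a persistent link, if the driver does not do this cleanup itself (mysqli with the mysqlnd driver does it automatically for its persistent connections), is mysqli::change_user(), which rolls back open transactions, releases locks, drops temporary tables, and resets session variables. Credentials here are placeholders:

$db = new mysqli('p:localhost', 'user', 'pass', 'mydb');

// Re-authenticating as the same user tells the server to discard any state
// left over from the previous request on this pooled connection.
$db->change_user('user', 'pass', 'mydb');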
I have been asked to re-develop an old php web app which currently uses mysql_query functions to access a replicated database (4 slaves, 1 master).
Part of this redevelopment will move some of the database into a mysql-cluster. I usually use PDO to access databases these days and I am trying to find out whether or not PDO will play nicely with a cluster, but I can't find much useful information on the web.
Does anyone have any experience with this? I have never worked with a cluster before ...
I've done this a couple different ways with different levels of success. The short answer is that your PDO connections should work fine. The options, as I see them, are as follows:
If you are using replication, then either write a class that handles connections to the various servers or use a proxy. The proxy may be hardware or software. MySQL Proxy (http://docs.oracle.com/cd/E17952_01/refman-5.5-en/mysql-proxy.html) is the software load balancer I used to use, and for the most part it did the trick. It automatically routes traffic between your readers and writers, and handles failover like a champ. Every now and then we'd write a query that would throw it off and have to tweak things, but that was years ago. It may be in better shape now.
Another option is to use a standard load balancer and create two connections - one for the writer and the other for the readers. Your app can decide which connection to use based on the function it's trying to perform.
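For example, such a two-handle setup might look like this (the hostnames "db-writer" and "db-readers" are placeholders, with "db-readers" being the load-balanced address in front of the slaves):

$writer  = new PDO('mysql:host=db-writer;dbname=app',  'user', 'pass');
$readers = new PDO('mysql:host=db-readers;dbname=app', 'user', 'pass');

// The application picks the handle per statement: reads go through the
// load-balanced address, writes go straight to the master.
$rows = $readers->query('SELECT id, name FROM customers')->fetchAll(PDO::FETCH_ASSOC);
$stmt = $writer->prepare('UPDATE customers SET name = ? WHERE id = ?');
$stmt->execute(['Alice', 42]);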
Finally, you could consider the MaxDB cluster option available from MySQL. In this setup, the MySQL servers are all readers AND writers. You only need one connection, but you'll need a load balancer to route all of the traffic. A MaxDB cluster can be tricky if the indexes get out of sync, so tread lightly if you go with this option.
Clarification: When I refer to connections what I mean is an address and port to connect to MySQL on - not to be confused with concurrent connections running on the same port.
Good luck!
Have you considered hiding the cluster behind a hardware or software load balancer (e.g. HAProxy)? This way, the client code doesn't need to deal with the cluster at all, it sees the cluster as just one virtual server.
You still need to distinguish applications that write from those that read. In our system, we put the slave servers behind the load balancer, and read-only applications use this cluster, while writing applications access the master server directly. We don't try to make this happen automatically; applications that need to update the database simply use a different server hostname and username.
Write a wrapper class for the DB that has your connect and query functions in it...
The query function needs to look at the very first word to detect whether it's a SELECT, and use the slave DB connection for those; anything else (INSERT, UPDATE, RENAME, CREATE, etc.) needs to go to the MASTER server.
The connect() function would look at the array of slaves and pick a random one to use.
You should only connect to the master server when you need to do an update (most web pages shouldn't be updating the DB, only reading data; make sure you don't waste time connecting to the MASTER db when you won't use it).
You can also use a static variable in your class to hold your DB connections; that way connections are shared between instances of your DB class (i.e. you only have to open the DB connection once instead of every time you call $db = new DB()).
Abstracting the database functions into a class like this also makes it easier to debug or add features.
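A minimal sketch of such a wrapper class, assuming PDO and made-up hostnames and credentials (error handling and configuration are left out):

class DB
{
    // Static handles are shared between instances of this class, so each
    // connection is opened at most once per request.
    private static $master = null;
    private static $slave  = null;
    private static $slaveHosts = ['slave1', 'slave2', 'slave3', 'slave4'];

    private static function connect($host)
    {
        return new PDO("mysql:host=$host;dbname=app", 'user', 'pass');
    }

    public function query($sql, array $params = [])
    {
        // Route by the first word: SELECTs go to a random slave, everything
        // else (INSERT, UPDATE, RENAME, CREATE, ...) goes to the master.
        $isRead = stripos(ltrim($sql), 'SELECT') === 0;

        if ($isRead) {
            if (self::$slave === null) {
                self::$slave = self::connect(self::$slaveHosts[array_rand(self::$slaveHosts)]);
            }
            $pdo = self::$slave;
        } else {
            if (self::$master === null) {
                self::$master = self::connect('master');   // only opened when a write happens
            }
            $pdo = self::$master;
        }

        $stmt = $pdo->prepare($sql);
        $stmt->execute($params);
        return $stmt;
    }
}

// Usage: $db = new DB(); $rows = $db->query('SELECT * FROM posts WHERE id = ?', [1])->fetchAll();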
When I load a PHP page in my browser, it connects to the DB and runs some SQL. Let's say I now follow a link and it takes me to another page within the same website. What happened server-side? Did my first connection to the DB close and then re-open again? Is that what happens in most cases?
It's very likely that the connection to your database is closed once the page has been processed by PHP; the result of the PHP script is then sent to the browser and viewed by the user.
Assuming you're running MySQL, the only reason this wouldn't be the case is if the PHP script uses mysql_pconnect, in which case the connection will be kept open. However, it's usually good practice not to use this unless the MySQL server and the PHP server have a low-bandwidth connection that's unused by other processes.
Yes, in most cases your database connection will close and re-open. In particular if the PHP interpreter is restarted for each page then it has no choice but to do this.
I believe the typical exception (although I've never used this myself) is where you're using something like mod_php.so (for Apache) and you arrange for a DB connection object to be stored as part of the user's session state. I don't believe that's recommended practice, though.
See http://php.net/manual/en/features.persistent-connections.php for more.
That's the usual case yes. But if you're talking about MySQL, you can use mysql_pconnect to keep persistent connections.
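For instance (the old ext/mysql function was removed in PHP 7, so the mysqli form with the "p:" host prefix is its rough modern equivalent; credentials are placeholders):

// Legacy API: persistent connection via mysql_pconnect() (removed in PHP 7).
$link = mysql_pconnect('localhost', 'user', 'pass');

// Roughly equivalent with mysqli: prefix the hostname with "p:".
$db = new mysqli('p:localhost', 'user', 'pass', 'mydb');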
It depends on how the PHP was developed. If it was coded to close after each transaction, then yes it will be re-opened each time you view a page.
There is also the concept of a database connection pool. When a connection has been used, it is not closed but placed into a "pool" of connections waiting to be used again. Once a specified amount of time passes without the connection being used, it is closed to save resources.
Pooling connections saves the processing time of having to reopen a connection on each page reload.
Are there any tricks for transferring and/or sharing the resource links across multiple PHP pages? I do not want the server to keep connecting to the database just to obtain another link for the same user to the same database.
Remember that the link returned from mysql_connect() is automatically closed after the script it originated in finishes executing. Is there any way to instead close it manually at the end of each session?
PHP allows persistent MySQL connections, but there are drawbacks. Most importantly, idle Apache children end up sitting around holding idle database connections open. Database connections take up a decent amount of memory, so you really only want them open while they're actually being used.
If your user opens one page every minute, it's far better to have the database connection closed for the 59 seconds out of every minute you're not using it, and re-open it when needed, than to hold it open continually.
Instead, you should probably look into connection pooling.
One of the advantages of MySQL over other heavier-weight database servers is that connections are cheap and quick to set up.
If you are concerned about large numbers of connections retrieving the same information, you may like to look at caching that information instead of, or as well as, getting it from disk. As usual, profiling the number and type of SQL calls being made will tell you a great deal more than anyone here guessing at what you should really be doing next.
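As a sketch of the caching idea, assuming the APCu extension is available (the key name, TTL, and query are made up for illustration):

$key = 'recent_posts';
$posts = apcu_fetch($key, $hit);

if (!$hit) {
    // Cache miss: hit MySQL once, then keep the result for 60 seconds so the
    // next page loads skip the connection and the query entirely.
    $db = new mysqli('localhost', 'user', 'pass', 'blog');
    $result = $db->query('SELECT id, title FROM posts ORDER BY id DESC LIMIT 10');
    $posts = [];
    while ($row = $result->fetch_assoc()) {
        $posts[] = $row;
    }
    $db->close();
    apcu_store($key, $posts, 60);
}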