Connect to Oracle database on a different server from PHP

Hello, I have a database engine sitting on a remote server, while my webserver runs locally. I have mostly worked with client-server setups where the same server hosts both the webserver and the database engine. Now I need to connect to an Oracle database that sits on a different server.
Can anybody give me any suggestions? I believe odbc_connect might not work. Do I use the OCI8 drivers? How would I connect to my database server?
Also, I will have a very high number of database calls going back and forth, so is it better to go with a persistent connection, or should I still open individual connections per request?

If you're using ODBC, then you need to use PHP's ODBC functions rather than the OCI8 driver. Otherwise, you need the Oracle client installed on your webserver (even if it's just Oracle's Instant Client), and then you can use OCI8.
EDIT
Personally I wouldn't recommend persistent connections. While there is a slowdown when connecting to a database (especially a remote database), persistent connections can cause more issues if you have a high hit count (exceeding the number of persistent connections available), or if there's a network hiccup of any kind that leaves orphaned connections on the database, and potentially orphaned pconnections as well.
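For reference, here is a minimal OCI8 connection sketch; the host, port, service name and credentials are placeholders, not values from the question:

<?php
// Minimal sketch: connect from PHP (OCI8) to a remote Oracle server.
// Assumes the Oracle Instant Client is installed on the web server;
// host, port, service name and credentials below are placeholders.

$conn = oci_connect(
    'myuser',
    'mypass',
    '//dbhost.example.com:1521/ORCL'   // Easy Connect string: //host:port/service_name
);

if (!$conn) {
    $e = oci_error();
    trigger_error(htmlentities($e['message'], ENT_QUOTES), E_USER_ERROR);
}

$stid = oci_parse($conn, 'SELECT sysdate FROM dual');
oci_execute($stid);
$row = oci_fetch_assoc($stid);
echo $row['SYSDATE'];

oci_free_statement($stid);
oci_close($conn);

// oci_pconnect() takes the same arguments and returns a persistent
// connection, but see the caveats about persistent connections above.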

The Oracle client is available for each platform. In summary, it is a collection of the files needed to talk to Oracle plus a command-line utility. Just go to oracle.com and look under Downloads.

Related

Can heavy MySQL load cause PostgreSQL connection problems?

I am running PHP, PostgreSQL, and MySQL. Once a day, at the same time each day, I see the below error when PHP tries to connect to PostgreSQL:
pg_connect(): Unable to connect to PostgreSQL server: could not connect to server: Resource temporarily unavailable Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
At this point, PostgreSQL is barely loaded in terms of connections, from what I can see.
However, MySQL is unusually heavily loaded in terms of connections at exactly the same point in time.
Is it possible that the MySQL connection load is causing the issue in connecting to PostgreSQL?
This can happen in many scenarios, but there are a few things to check on your server. For example, if you have older versions of PostgreSQL installed in your OS, remove all of them. It can also be an OS configuration issue: make sure you check the limits on processes and on socket traffic. Check the answers in this link; some of them can help you identify the issue with your processes:
Access Postgres when I get an error about "/var/run/postgresql/.s.PGSQL.5432"?
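Not from the answer above, but since the error message mentions the Unix domain socket specifically, one quick check is whether a TCP connection to the same server succeeds at the same moment; if it does, the problem is likely in the socket or limit configuration rather than in PostgreSQL itself. A sketch with placeholder credentials:

<?php
// Diagnostic sketch: force a TCP connection to PostgreSQL instead of the
// Unix domain socket. Database name, user and password are placeholders.
$conn = @pg_connect('host=127.0.0.1 port=5432 dbname=mydb user=myuser password=mypass');

if ($conn === false) {
    echo "TCP connection failed too - the problem is probably server-side (load, limits, config).\n";
} else {
    echo "Connected over TCP - the failure looks specific to the Unix socket setup.\n";
    pg_close($conn);
}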

Running the same database both locally and online in MySQL

I am creating software for my client in PHP and MySQL. The software will be running on the local network, but it should also be available online, that is, all the data should be viewable online. I would like to know whether there is a way to update the MySQL database from local to online whenever the internet is connected; if the internet is not connected, all the data will be stored in the local MySQL server. They won't be adding any data on the online server; they will only view the contents. Kindly help me with this.
It sounds like you are looking for a way to have a read-only, "online" (assuming WAN) MySQL server which is updated from a read-write, "offline" (LAN) MySQL Server which is updated by your users.
If that's the case, you may want to consider a Master/Slave MySQL Replication configuration:
http://www.rackspace.com/knowledge_center/article/mysql-replication-masterslave
You can run a cron job on your local server which monitors and uploads data to the online server when the internet is available. You need to track (set a status flag for) which data has been uploaded and which has not, for synchronization.
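A rough sketch of what such a cron script could look like; the "orders" table, its columns and the "synced" status flag are made-up names used only for illustration:

<?php
// Sketch of a cron-driven sync: push locally inserted rows to the remote
// (read-only) server when it is reachable. Table and column names,
// including the "synced" flag, are hypothetical.

$local  = new mysqli('localhost', 'local_user', 'local_pass', 'appdb');

// Try the remote server; exit quietly if the internet link is down.
$remote = @new mysqli('online.example.com', 'remote_user', 'remote_pass', 'appdb');
if ($remote->connect_errno) {
    exit; // no connectivity - try again on the next cron run
}

$rows = $local->query('SELECT id, customer, total FROM orders WHERE synced = 0');
$ins  = $remote->prepare('INSERT INTO orders (id, customer, total) VALUES (?, ?, ?)');
$mark = $local->prepare('UPDATE orders SET synced = 1 WHERE id = ?');

while ($row = $rows->fetch_assoc()) {
    $ins->bind_param('isd', $row['id'], $row['customer'], $row['total']);
    if ($ins->execute()) {
        $mark->bind_param('i', $row['id']);
        $mark->execute(); // mark as synced only after a successful upload
    }
}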
Set up replication between your MySQL databases. The local network database would be the master and the web server would be the slave.
See http://dev.mysql.com/doc/refman/5.5/en/replication-howto.html for setting up replication.
I would suggest you set up master/slave replication.
I would make the local server the master and the internet server the slave.
However, if you are using a common shared hosting service, I am not too sure whether you have permission to set up such a DB replication structure.
Try https://www.digitalocean.com/community/articles/how-to-set-up-master-slave-replication-in-mysql. However, with MySQL versions before 5.6 (I think) you can't delay the replication by a predefined interval. However, http://alexalexander.blogspot.com/2013/03/mysql-slave-delay-how-to.html suggests using something called the Percona Toolkit. I haven't used it though.

Results cache for Oracle 10g

I've read that Oracle 11g has a results cache feature and I could really benefit from it. However, my client has Oracle 10g. Is there any sensible way to emulate it in a web application powered by PHP/5.2 that connects to a remote Oracle 10g server via ODBC (with Oracle's driver, not Microsoft's)?
The idea is to cache complex queries on large tables that normally return small data sets, and to make sure that cached data gets discarded when the underlying tables change (it doesn't need to be immediate; a one-hour delay is acceptable).
I can install new software on the web server (not the Oracle server) and I could probably switch to OCI8 if necessary.
You could look at materialized views in the database, with staleness tolerated.
memcached is an option.
But your client needs to upgrade to 11g, since 10g support ends on 31-Jul-2011; they could purchase extended support until 31-Jul-2013. (This info could have changed.)
You could use the In-Memory Database Cache option of 11gR2. It also works for 10.2.0.4. This is a spin-off from the TimesTen acquisition, and you can use it to define a write-through cache on your application servers. This allows for very fast returns. It scales wonderfully well; combine the app servers with the cache grid servers. In your case it could be fine to use mviews if the data set to be scanned is large. If it is just complex, the cache will work fine, even for tables that are constantly modified.
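To follow up on the memcached suggestion above, here is a rough sketch of caching a query's result set for one hour (the acceptable staleness from the question). Hostnames, credentials and the query itself are placeholders; it is shown with OCI8 (since switching to it is an option) and assumes the Memcached extension, though the older Memcache extension works similarly:

<?php
// Sketch: cache an Oracle result set in memcached with a 1-hour TTL.
// Hostnames, credentials and the query are placeholders for illustration.

$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$sql = 'SELECT region, SUM(amount) AS total FROM big_table GROUP BY region';
$key = 'qcache:' . md5($sql);

$rows = $mc->get($key);
if ($rows === false) {                       // cache miss: hit Oracle
    $conn = oci_connect('myuser', 'mypass', '//dbhost:1521/ORCL');
    $stid = oci_parse($conn, $sql);
    oci_execute($stid);

    $rows = array();
    while ($row = oci_fetch_assoc($stid)) {
        $rows[] = $row;
    }
    oci_free_statement($stid);
    oci_close($conn);

    $mc->set($key, $rows, 3600);             // tolerate stale data for 1 hour
}

// $rows now holds the (possibly cached) result set.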

Should I use MySQL persistent connect?

The situation is: I have one Debian Server running LAMP with one Virtual Host with one Website. My MySQL has only one user from that website.
In this case would I benefit from using a persistent connection?
The PHP documentation seems to advise against persistent connections in any case.
Thanks
Edit: Yes, the MySQL server is on the same machine.
There's a discussion here http://groups.google.com/group/comp.databases.mysql/browse_thread/thread/4ae68befe1b488e7/e843f0b9e59ad710?#e843f0b9e59ad710 :
"No, it is not (better). Contrary, using mysql_pconnect() is considered harmful, as it tends to hog the MySQL server with idle connections."
If you connect via 'localhost', the connection will automatically be established via the MySQL socket, which is really cheap anyways.
(Groups link taken from MySQL Persistent Connections)
While you can get some performance benefit from using a persistent connection, if the MySQL server is on the same machine and you're not experiencing problems then it is probably not worth it. It is too easy to accidentally leave connections open, and the actual performance benefit is only going to be noticeable at high volumes.
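If you do want to measure the difference yourself, with mysqli it is a one-character change, so benchmarking both is easy (credentials below are placeholders):

<?php
// In mysqli the only difference between a normal and a persistent
// connection is the "p:" prefix on the hostname. Credentials are placeholders.
$normal     = new mysqli('localhost',   'user', 'pass', 'mydb');
$persistent = new mysqli('p:localhost', 'user', 'pass', 'mydb');

// With the legacy mysql extension the equivalent pair is
// mysql_connect() versus mysql_pconnect().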

Connecting to external MySQL DB from a web server not running MySQL

While I've been working with MySQL for years, this is the first time I've run across this very newbie-esque issue. Due to a client demand, I must host their website files (PHP) on an IIS server that is not running MySQL (instead, they are running MSSQL). However, I have developed the site using a MySQL database which is located on an external host (Rackspace Cloud). Obviously, my mysql_connect function is now bombing because MySQL is not running on localhost.
Question: Is it even possible to hit an external MySQL database if localhost is not running MySQL?
Apologies for the rookie question, and many thanks in advance.
* To clarify, I know how to connect to a remote MySQL server, but it is the fact that my IIS web server is not running ANY form of MySQL (neither server nor client) that is giving me trouble. Put another way, phpinfo() does not return anything about MySQL. *
Yes, you can use a MySQL database that's not on the same machine as Apache+PHP.
Basically, you'll connect from PHP to MySQL via a network connection (TCP-based, I suppose), which means:
MySQL must be configured to listen to, and accept connections on, the network interface
Which means configuring MySQL to do that
And granting the required privileges to your MySQL user, so it can connect from a remote server
And PHP must be able to reach the server hosting MySQL.
Note, though, that having MySQL on a server that's far away might not be great for performance: each SQL query will have to go over the network, and this could take some time...
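Once the MySQL client extension is installed on the IIS box (see the answers below), the connection itself just points at the remote host instead of localhost. A sketch with placeholder host and credentials:

<?php
// Sketch: connect from PHP to a remote MySQL server over TCP.
// Hostname and credentials are placeholders for the Rackspace instance.

$host = 'db.example.com';   // the remote MySQL server, not localhost
$db   = 'mydb';
$user = 'myuser';
$pass = 'mypass';

// mysqli:
$mysqli = new mysqli($host, $user, $pass, $db, 3306);
if ($mysqli->connect_errno) {
    die('Connect failed: ' . $mysqli->connect_error);
}

// or PDO:
$pdo = new PDO("mysql:host=$host;port=3306;dbname=$db", $user, $pass);

// Remember the MySQL user must also be granted access from the web
// server's IP address on the MySQL side.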
If phpinfo() is not returning anything about MySQL, you need to install the MySQL extension for PHP. The easiest way to do that is probably to just upgrade PHP to the latest version; if not, there is a .DLL file that you will need.
http://www.php.net/manual/en/mysql.installation.php
You will need to install the MySQL extensions. This link should help: http://php.net/manual/en/install.windows.extensions.php
The MySQL server has nothing to do with PHP itself. What "MySQL support" in PHP means is that it's been compiled with (or has a module loaded) that implements the MySQL client interface. For Windows, it'd be 'mysql.dll', and on Unix-ish systems it'd be 'mysql.so'. Once those are loaded, then the various MySQL interfaces (mysql_xxx(), mysqli_xxx(), PDO, MDB2, etc...) will be able to access any MySQL server anywhere, as long as you have the proper connection string.
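A quick way to confirm which MySQL client interfaces your PHP build actually has loaded (just a diagnostic, not a fix):

<?php
// Check which MySQL client interfaces this PHP installation has loaded.
var_dump(
    extension_loaded('mysql'),      // legacy mysql_* functions
    extension_loaded('mysqli'),
    extension_loaded('pdo_mysql')
);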
