I've read that Oracle 11g has a result cache feature and I could really benefit from it. However, my client has Oracle 10g. Is there any sensible way to emulate it in a web application powered by PHP/5.2 that connects to a remote Oracle 10g server via ODBC (with Oracle's driver, not Microsoft's)?
The idea is to cache complex queries on large tables that normally return small data sets, and to make sure that cached data gets discarded when the underlying tables change (it doesn't need to be immediate; a one-hour delay is acceptable).
I can install new software on the web server (not the Oracle server) and I could probably switch to OCI8 if necessary.
You could look at materialized views in the database, with query rewrite integrity set to `stale_tolerated`.
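A minimal sketch of what that could look like, with an illustrative summary view over a hypothetical `orders` table (all object and column names here are made up):

```sql
-- Hypothetical example: a summary MV over a large ORDERS table,
-- rebuilt every hour, which matches the one-hour staleness budget.
CREATE MATERIALIZED VIEW order_totals_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH SYSDATE NEXT SYSDATE + 1/24   -- refresh hourly
  ENABLE QUERY REWRITE
AS
  SELECT customer_id, COUNT(*) AS order_count, SUM(amount) AS total
  FROM   orders
  GROUP  BY customer_id;

-- Let the optimizer rewrite queries against the MV even between refreshes:
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = stale_tolerated;
```

With query rewrite enabled, the application's existing SQL against `orders` can be answered from the precomputed view without any PHP-side changes.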
memcached is an option.
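Even without memcached, a result cache can be emulated on the web server in plain PHP 5.2-compatible code. A minimal sketch, using serialized files with a TTL; the real ODBC/OCI8 fetch function is passed in by name, and all names here are illustrative:

```php
<?php
// Minimal file-based query cache with a TTL (PHP 5.2: no closures).
// The actual query runner is injected so the cache is DB-driver-agnostic.

function cached_query($sql, $params, $fetcher, $ttl = 3600, $dir = '/tmp')
{
    $key  = md5($sql . '|' . serialize($params));
    $file = $dir . '/qc_' . $key . '.ser';

    // Serve from cache while the file is younger than the TTL.
    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return unserialize(file_get_contents($file));
    }

    // Cache miss: run the real query and store the result set.
    $rows = call_user_func($fetcher, $sql, $params);
    file_put_contents($file, serialize($rows));
    return $rows;
}

// Stand-in for the real ODBC/OCI8 fetch function.
function fake_fetch($sql, $params)
{
    return array(array('id' => 1, 'total' => 42));
}

$rows = cached_query('SELECT 1 FROM dual', array(), 'fake_fetch',
                     3600, sys_get_temp_dir());
```

Since the one-hour staleness budget is acceptable, a pure TTL is enough; there is no need to detect table changes from PHP.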
But your client needs to plan an upgrade to 11g, since support for 10g ends on 31-Jul-2011; they could purchase extended support until 31-Jul-2013. (This information may have changed.)
You could use the In-Memory Database Cache option of 11gR2; it also works with 10.2.0.4. This is a spin-off from the TimesTen acquisition, and you can use it to define a write-through cache on your application servers, which allows for very fast response times. It scales wonderfully well when you combine the app servers with the cache grid servers. In your case, materialized views could be the better fit if the data set to be scanned is large; if the queries are merely complex, the cache will work fine, even for tables that are constantly modified.
My plan is to store data in a local MySQL database and, after a time interval, update my live MySQL database remotely. Is it possible?
I'm planning an inventory management script in PHP/MySQL. I will install the web application locally for my clients, and it will back up the local data to the live server via an API or a library.
Can anyone suggest a library for this?
Thanks in advance.
You can achieve this with different methods:
Database replication (the recommended solution); it handles almost everything for you. There are many tutorials available online on setting it up.
A scheduled PHP script that syncs data at your specified intervals. There are several packages available for this, e.g. https://github.com/mrjgreen/db-sync. For scheduling, you can use cron, Supervisor, etc.
But I would personally recommend replication, since it is a native DBMS solution for exactly this scenario.
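If you go the scheduled-script route instead, the core of it is a watermark-based incremental copy. A minimal sketch, with SQLite standing in for the two MySQL servers and made-up table and column names:

```php
<?php
// Sketch of option 2: an incremental sync that copies rows created since
// the last run from the local DB to the live DB. SQLite (via PDO) stands
// in for the two MySQL servers; table and column names are illustrative.

function sync_new_rows(PDO $local, PDO $live, $lastId)
{
    $stmt = $local->prepare('SELECT id, name FROM items WHERE id > ? ORDER BY id');
    $stmt->execute(array($lastId));

    $ins   = $live->prepare('INSERT INTO items (id, name) VALUES (?, ?)');
    $maxId = $lastId;
    foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
        $ins->execute(array($row['id'], $row['name']));
        $maxId = max($maxId, (int) $row['id']);
    }
    return $maxId; // persist this watermark for the next cron run
}

// Demo with two in-memory databases.
$local = new PDO('sqlite::memory:');
$live  = new PDO('sqlite::memory:');
foreach (array($local, $live) as $db) {
    $db->exec('CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)');
}
$local->exec("INSERT INTO items VALUES (1, 'a'), (2, 'b')");

$watermark = sync_new_rows($local, $live, 0); // copies both rows
```

The returned watermark would be stored (in a file or a table) between cron runs, so each run only transfers new rows. Note this simple form does not capture updates or deletes, which is one reason replication is the safer default.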
I have a system in which there are several copies of a MySQL DB schema running on remote servers (and it's not possible to consolidate them all into one DB schema in the cloud).
However, this has proven troublesome because whenever the master DB schema is updated, I have to then remotely log into all the other servers and manually update the schemas using the sync tool in MySQL Workbench, which honestly doesn't work very well (i.e., it doesn't catch changes to views, etc.).
As such, I would like to come up with a way to have the master DB schema stored somewhere in AWS and have all the other, remote instances do something like a daily check for anything that's different between the schema locally installed on that server and the master schema in AWS.
Are there tools out there for this sort of thing, and what are they called? Also, because the application itself is written in PHP, using a tool that's easy to use in PHP would be ideal.
Thank you.
Also, I should note that a lot of the remote schemas are stored on servers behind very secure firewalls, so I don't think that pushing the master DB schema to the remote instances will work. Instead, I think that the request for the schema update has to originate from each of the remote servers to the master schema on AWS, if that makes a difference at all.
The db-sync project for MySQL sounds like the tool for you. Here is the git repo: https://github.com/mrjgreen/db-sync
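Whatever tool you use to apply the changes, the pull-based daily check you describe can be reduced to comparing schema fingerprints: each remote server hashes its own DDL (e.g. the output of `SHOW CREATE TABLE` for every table and view) and compares it against a fingerprint the master publishes over HTTPS. A sketch of just the fingerprint logic, with made-up DDL strings:

```php
<?php
// Sketch: fingerprinting a schema so a remote server can cheaply detect
// drift from the master. The DDL strings here are illustrative; in the
// real check they would come from SHOW CREATE TABLE / SHOW CREATE VIEW.

function schema_fingerprint(array $ddlStatements)
{
    $normalized = array_map('trim', $ddlStatements);
    sort($normalized);                    // order-independent comparison
    return sha1(implode("\n", $normalized));
}

$localDdl  = array('CREATE TABLE a (id INT)', 'CREATE VIEW v AS SELECT id FROM a');
$masterDdl = array('CREATE VIEW v AS SELECT id FROM a', 'CREATE TABLE a (id INT)');

$inSync = schema_fingerprint($localDdl) === schema_fingerprint($masterDdl);
// When the fingerprints differ, the remote fetches and applies the
// master's migration scripts, then re-checks.
```

Because the comparison only needs an outbound HTTPS request from each remote server, it works from behind the firewalls you mention; note that including views in the fingerprint covers the gap you hit with the Workbench sync tool.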
I have a website running with PHP and MySQL on CentOS (an Amazon instance). The newly bought ERP (to be integrated with the existing system) uses Oracle as its database, located on a separate Windows server. Orders from the website are inserted into the master MySQL database and replicated to the slave MySQL database. These orders need to be pushed to the Oracle DB. I have arrived at 4 methods to do this:
1. Use a MySQL UDF for HTTP communication that sends the rows, via an INSERT trigger on the slave, to the Oracle web services on the Oracle server.
2. Use cron jobs (with a short interval, maybe 5 minutes, i.e. polling) running a PHP script that fetches new orders from MySQL and sends them to the Oracle DB via Oracle/PHP services on the Oracle-hosted server.
3. Use the sys_exec() UDF to invoke a PHP script that inserts into the Oracle DB.
4. Use memcached with MySQL and let PHP poll memcached to retrieve data and send it to the Oracle server, though I'm unsure whether we can migrate the existing installation to MySQL 5.6.
I already have the UDFs in place and have tested them; they are good to go. But I'm still in a dilemma about data integrity and reliability when using UDFs with triggers.
Is there a better method for doing this? If not, which of these methods should I follow?
I am aware of the security risks of UDFs; you can't restrict them to a particular user.
One more thing: I am not allowed to introduce changes to the existing website's PHP code for this.
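Regarding the integrity worry: method 2 can be made safe against delivery failures by marking each order as pushed only after the Oracle side acknowledges it, so a failed push is simply retried on the next cron run. A minimal sketch, with SQLite standing in for the slave MySQL database and the HTTP sender stubbed out (all names are illustrative):

```php
<?php
// Sketch of method 2: a cron-driven PHP poller. An order is flagged as
// pushed only after the remote service confirms receipt, so each run is
// safe to repeat. Table/column names and the sender are made up.

function push_pending_orders(PDO $db, $sender)
{
    $pending = $db->query('SELECT id, payload FROM orders WHERE pushed = 0 ORDER BY id')
                  ->fetchAll(PDO::FETCH_ASSOC);
    $mark = $db->prepare('UPDATE orders SET pushed = 1 WHERE id = ?');

    $ok = 0;
    foreach ($pending as $order) {
        // Only mark as pushed when the remote service confirmed receipt.
        if (call_user_func($sender, $order) === true) {
            $mark->execute(array($order['id']));
            $ok++;
        }
    }
    return $ok;
}

// Demo with an in-memory DB and a stub sender whose second call fails.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE orders (id INTEGER PRIMARY KEY, payload TEXT, pushed INTEGER)');
$db->exec("INSERT INTO orders VALUES (1, 'o1', 0), (2, 'o2', 0)");

function flaky_sender($order)
{
    static $calls = 0;
    return ++$calls !== 2;     // simulate one transient failure
}

$first  = push_pending_orders($db, 'flaky_sender'); // pushes order 1 only
$second = push_pending_orders($db, 'flaky_sender'); // retries and pushes order 2
```

The Oracle-side insert should then be idempotent (e.g. keyed on the order id) so a retry after a lost acknowledgement cannot create duplicates.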
SymmetricDS replicates parts of, or entire, database schemas across different vendors' products. It works by installing triggers, but it replicates on a per-transaction basis, ensuring that transactions get replayed on the target database(s) in the right order.
Hello, I have a database engine sitting on a remote server, while my web server runs locally. I have mostly worked with a client-server architecture where one server hosts both the web server and the database engine. Now I need to connect to an Oracle database on a different server.
Can anybody give me any suggestions? I believe odbc_connect might not work. Do I use the OCI8 driver? How would I connect to my database server?
Also, I will have a very high number of database calls going back and forth, so is it better to use persistent connections, or should I stick with individual connections per request?
If you're using ODBC, then you need to use PHP's ODBC driver rather than the OCI8 driver. Otherwise, you need the Oracle client installed on your web server (even if it's just Oracle's Instant Client), and then you can use OCI8.
EDIT
Personally, I wouldn't recommend persistent connections. While there is a slowdown when connecting to a database (especially a remote database), persistent connections can cause more issues if you have a high hit count (exceeding the number of persistent connections available), or if there's a network hiccup of any kind that leaves orphaned connections on the database, and potentially orphaned persistent connections on the PHP side as well.
The Oracle client is available for each platform. In summary, it is a collection of the files needed to talk to Oracle, plus a command-line utility. Just go to oracle.com and look under Downloads.
What would be a better choice for making a database-driven Adobe AIR (desktop) application?

1. Using PHP + MySQL with AIR
2. Using SQLite

If I choose SQLite, then I cannot reuse my code for an online application. If I choose option 1, I have to block a few port numbers on the user's machine. Also, I am using XAMPP to provide the user with PHP and MySQL, so XAMPP opens a command window for as long as it's running, and users get confused about what that window is for. It makes the end-user experience slightly confusing.
I'd definitely use SQLite, as it's included in AIR.
May I suggest: write your code in two sections. The UI, which uses a JSON feed to populate itself, and the API, which provides the JSON data. When it comes time to port the application to the web, you can reuse the same UI with a rewritten API.
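To illustrate the split, the API side can be a tiny PHP endpoint that emits JSON, which the AIR UI (and later the web UI) consumes unchanged. A sketch with the data source stubbed out; in the real app it would query SQLite:

```php
<?php
// Sketch of the UI/API split: the API is a small PHP function emitting
// JSON for the UI to consume. The data source is stubbed here; names
// are illustrative.

function api_response(array $rows)
{
    return json_encode(array('ok' => true, 'data' => $rows));
}

// In an endpoint script you would do something like:
//   header('Content-Type: application/json');
//   echo api_response(fetch_items_from_sqlite());
$json = api_response(array(array('id' => 1, 'name' => 'widget')));
```

Because the UI only ever sees the JSON contract, swapping the SQLite-backed API for a MySQL-backed one later requires no UI changes.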
Whatever you do, don't open up a command window while the program is running. If you do that, your customers will uninstall like there's no tomorrow.
As far as MySQL vs. SQLite goes, the standard approach is: if the app communicates remotely, feel free to use MySQL, but if you're installing the DB on the client, you should use an embedded standalone DB (SQLite).
How complex do you expect your app to be that you can't use SQLite (besides not being able to reuse some of the code, as you mentioned)?
If XAMPP is too confusing for your client, install Apache and MySQL as standalone services. It's essentially the same thing, and you'll have more control over what's running in Apache/MySQL. Plus, you won't get an annoying command window (though, to be quite honest, I don't recall a window that I couldn't minimize to the tray when I ran XAMPP).
My suggestion is to use SQLite as your local database and write a synchronization API that synchronizes the local SQLite database with the server-side MySQL database. That way the system adapts to your client: if the client is standalone, SQLite serves on its own; otherwise MySQL does. The only thing left to decide is how to invoke the synchronization API.
Just check the Sample Application