Usage of MySQL when the internet connection is offline - PHP

I'm using a self-made customer system written in PHP that runs against a local MySQL database.
Now I have a second computer at a different location which has to use this database too, so I put the MySQL database on a server reachable over the internet.
My problem is that the first computer often has problems with its internet connection, and then the program doesn't work. But it has to work every time!
Now I don't know how to handle this problem.
A local database and one on the internet - but how should I merge them?
Should I keep a local DB on each computer and merge them together into one?
I also want to change the framework behind this system to Symfony2, so is there a way to solve this problem with that framework (e.g. with Doctrine)?
Thanks for your help!
Update:
My limitation is the internet connection on the first computer, which cannot be eliminated.

If you really have the limitations of (1) not being able to move the database off the machine with the bad connection and (2) not being able to fix the bad connection, you are going to have to keep some sort of local instance on the second machine.
I would try to set up master-master replication from the first machine with the bad connection to the second machine. I'm not sure how reliable this will be, considering the replication will often be failing due to the first machine's bad connection. This problem may be exacerbated if one or both machines are running old versions of MySQL. MySQL 5.5, for example, can be configured to actively monitor replication connectivity.
If the majority of your application does READS instead of WRITES, perhaps you could install Memcached (or something similar) on the second machine so that the application can pull data from local memory without requiring a connection to the MySQL server.
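As a rough sketch of that read-through cache (assuming the pecl Memcached extension, and with load_customer() standing in for whatever function currently queries MySQL over the flaky connection):

    $cache = new Memcached();
    $cache->addServer('127.0.0.1', 11211);               // cache runs on the second machine itself
    
    $customer = $cache->get('customer_' . $id);          // try local memory first
    if ($customer === false) {                           // cache miss (or cache unavailable)
        $customer = load_customer($id);                  // hypothetical helper that queries the MySQL server
        $cache->set('customer_' . $id, $customer, 300);  // keep it locally for 5 minutes
    }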

There are a few ways to achieve what you want (although maybe not exactly how you described it), but the best way is definitely to host the database on a server that doesn't have internet connectivity problems. Look for hosting that allows remote MySQL connections.

How to set up a website with a PostgreSQL database?

I'm doing a group project and we're creating an online game. We're about halfway done, and now it's time to implement a database to store our records/data and make the website go live on the internet.
I'm just confused about how PSQL works exactly. My understanding is that PSQL needs to be running on some server in order to access it. For previous assignments, I downloaded Postgres for my Mac and ran it on localhost. The PHP code was something along the lines of:
$dbconn = pg_connect("host=localhost port=5432 dbname=mydbname");
So, if we intend to use PSQL, where would the server be? Does one of us have to host the server? Can we use some sort of free online server? How do we connect to that server with PHP?
In summary, I have two main questions:
How do we make our code go live on the internet for free? (It's just a temporary website and will only be up for a few weeks at most)
How can we all access a shared PSQL database?
Sorry for the noob questions, I just got started with web development and am still learning.
So, if we intend to use PSQL, where would the server be? Does one of us have to host the server? Can we use some sort of free online server? How do we connect to that server with PHP?
PostgreSQL is going to have to run on some machine visible to anyone who needs to access it. If only your web server (i.e., the machine running PHP and your website) needs to talk to the PGSQL server, then PGSQL can be installed on your web server. This is a very common configuration.
The server might also run on the LAN where your web server is running or it might be running on an entirely different network on a different continent. The most important thing is that any machine which must connect directly to the database can actually connect to it. If you're building a website, this means you have a web server. Your web server will need to connect to the PGSQL server. The second most important thing is that your web server and the PGSQL server should share a very fast connection for the sake of performance and efficiency.
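If the database does end up on a separate machine, the only thing that changes in the pg_connect call from your question is the host (plus you'll normally need credentials); a sketch with a placeholder host name, user and password:

    $dbconn = pg_connect("host=db.example.com port=5432 dbname=mydbname user=myuser password=secret");
    if (!$dbconn) {
        die("Could not connect to the database server");   // fail loudly if the remote box is unreachable
    }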
It's probably most common for your web server to also host the database. On an Ubuntu machine, installing a PostgreSQL server is as easy as running a few commands. A quick search yields many examples like this one.
How do we make our code go live on the internet for free? (It's just a temporary website and will only be up for a few weeks at most)
I don't know anyone who is in the habit of offering free web hosting or DBMS services. You could ask a friend, or put an ad on Craigslist or something. Or, if you are tech-savvy (it doesn't sound like you are), you could configure a high-end router at your home to use Dynamic DNS to point some domain at a machine running at your house.
How can we all access a shared PSQL database?
I have no experience with Heroku, but you might sniff around there. PostgreSQL's website also maintains a list of hosting companies. Amazon offers RDS instances running PGSQL. DigitalOcean has a variety of tutorials and how-tos on dealing with Postgres. You could probably fire up a 'droplet' server for super cheap and install it yourself without too much effort.
Amazon offers a free-tier database solution for Postgres - something like 300 hours (don't quote me on it) for a low-level setup.
They have tutorials on doing this here:
https://aws.amazon.com/rds/?nc2=h_m1
Once it's set up you get the endpoint, and your connection string becomes something like:
$dbconn = pg_connect("host=[ENDPOINT] user=postgres dbname=postgres");

Can a computer run a local server (MAMP etc.) as a backup for when the internet is not available - and then update the server database?

I have been looking into creating a custom MySQL point-of-sale system so that there is one centralised database for inventory levels across multiple stores, online sales, etc.
The biggest problem I see is the unlikely event that the internet drops out in the bricks-and-mortar stores. If this were to happen, could it be set up so that the POS system runs off a local MySQL database on that computer (using MAMP or something similar), and then once the internet is available again, automatically syncs the databases to update sales and inventory levels?
As for 'how is the actual POS system going to be accessed without internet': I was thinking that the POS system would run on the server when the internet is available, and when the connection drops out it would run from files stored on the machine, pointing to the local database on that machine.
Yes - a minimal, viable solution would just be to have all of the POS data entered locally as well as into the remote database; the local copy then serves as a sort of backup in case anything happens to the central DB.
As far as automating the 'fix' of the central DB after an outage goes, maybe the best way is to have the central system request sales data from the local DBs of each store. If the workflow is set up like this, then you don't really have to do anything 'special' about internet outages.
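A rough sketch of that pull, with purely hypothetical host names, credentials and table layout (each store keeps a local sales table; the central DB records which rows it has already pulled per store):

    mysqli_report(MYSQLI_REPORT_OFF);                        // handle connection failures ourselves
    $central = new mysqli('localhost', 'pos', 'secret', 'pos_central');
    
    foreach (['store1.example.local', 'store2.example.local'] as $host) {   // hypothetical store hosts
        $local = @new mysqli($host, 'pos', 'secret', 'pos_local');
        if ($local->connect_errno) {
            continue;                                        // this store is offline right now; catch up on the next run
        }
        $last = $central->query("SELECT COALESCE(MAX(source_id),0) FROM sales WHERE store='$host'")->fetch_row()[0];
        $rows = $local->query("SELECT id, item, qty, total FROM sales WHERE id > $last");
        while ($r = $rows->fetch_assoc()) {
            $central->query(sprintf(
                "INSERT INTO sales (store, source_id, item, qty, total) VALUES ('%s', %d, '%s', %d, %.2f)",
                $host, $r['id'], $central->real_escape_string($r['item']), $r['qty'], $r['total']
            ));
        }
    }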
The problem here is obviously writes. You can use replication to always have a local readable copy of the database, but it's tricky to have multiple masters when using replication. I haven't used MySQL Cluster, but it may be what you need.
But since the problem is writes, you could implement the writing part of the POS system as a service you send messages to. When the network is down, queue the messages and send them once you're back online.
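A very rough sketch of that queue idea in PHP, with hypothetical table names (sales, outbox) and a hypothetical central host: every write goes into the local DB and into an outbox table, and a cron job flushes the outbox whenever the central server is reachable:

    function record_sale(mysqli $localDb, array $sale) {
        $sql = sprintf(
            "INSERT INTO sales (item, qty, total) VALUES ('%s', %d, %.2f)",
            $localDb->real_escape_string($sale['item']), $sale['qty'], $sale['total']
        );
        $localDb->query($sql);                               // the local write always succeeds
        $localDb->query("INSERT INTO outbox (statement) VALUES ('" . $localDb->real_escape_string($sql) . "')");
    }
    
    function flush_outbox(mysqli $localDb) {
        mysqli_report(MYSQLI_REPORT_OFF);
        $central = @new mysqli('central.example.local', 'pos', 'secret', 'pos');   // hypothetical central host
        if ($central->connect_errno) {
            return;                                          // still offline; keep the messages queued
        }
        $res = $localDb->query("SELECT id, statement FROM outbox ORDER BY id");
        while ($row = $res->fetch_assoc()) {
            if ($central->query($row['statement'])) {
                $localDb->query("DELETE FROM outbox WHERE id = " . (int) $row['id']);
            } else {
                break;                                       // stop at the first failure and retry later
            }
        }
    }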
An easier solution may actually be to always ensure network stability. Set up some mobile (GSM/3G) connection for failover and possibly even a standard POTS telephone line as well.
MySQL Master/Slave Replication would seem the logical approach.
Your MAMP code works directly against the local (slave) database; and when the MAMP server has access to the master database, the databases stay synchronised. If the slave loses access to the Internet, it's still working locally; and when the internet connection is restored it will synchronise again with the master.
With a little care in the database design (particularly auto-increment PKs), you can run multiple local/slave servers, each with its own local store data, while the master acts as a repository of the data across all stores.
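For the auto-increment point, a sketch of the usual trick: give each server its own offset so two servers can never generate the same primary key (host names and credentials below are placeholders):

    // the same two settings can also go into each server's my.cnf under [mysqld]
    $store1 = new mysqli('store1.example.local', 'root', 'secret');
    $store1->query("SET GLOBAL auto_increment_increment = 2");   // step by the number of servers
    $store1->query("SET GLOBAL auto_increment_offset = 1");      // store 1 generates 1, 3, 5, ...
    
    $store2 = new mysqli('store2.example.local', 'root', 'secret');
    $store2->query("SET GLOBAL auto_increment_increment = 2");
    $store2->query("SET GLOBAL auto_increment_offset = 2");      // store 2 generates 2, 4, 6, ...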

The right way? Automated synchronization between 2 MySQL DBs

I have already read a few threads here and I also went through the MySQL replication documentation (incl. cluster replication), but I think it's not what I really want, and it's probably too complicated, too.
I have a local and a remote DB that might both be accessed/manipulated by 2 different people at the same time. What I'd like to do is sync them as soon as possible (meaning instantly, or as soon as the local machine goes online). Both DBs only get manipulated by my own PHP scripts.
My approach is the following:
If the local machine is online:
Let my PHP script on the local machine always send the SQL query to the remote DB too.
Let my PHP script on the remote machine always store its queries and...
...let the local machine ask the remote DB every x minutes for new queries and apply them locally.
If the local machine is offline:
Do step 2 for both machines as well, and send the stored queries to the remote DB as soon as the local machine goes online again. Also pull the queries from the remote machine, of course.
My questions are:
Did I just misunderstand Replication or am I right that my way would be easier in my case? Or is there any other good solution for what I'm trying to accomplish?
Any idea how I could check whether my local machine is online/offline? I guess I'd have to use JavaScript, but I don't know how. The browser/my script would always be running on the local machine.
What you're describing is master-master or multi-master replication. There are plenty of tutorials on how to set this up across the web. Definitely do your research before putting a solution like this into production as replication in MySQL isn't exactly elegant -- you need to know how to recover if (when?) something goes wrong.
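Whatever setup you end up with, it's worth monitoring it. A small sketch of checking replication health from PHP (assuming a monitoring user, names are placeholders, with the REPLICATION CLIENT privilege):

    $db  = new mysqli('localhost', 'monitor', 'secret');
    $row = $db->query('SHOW SLAVE STATUS')->fetch_assoc();
    
    if (!$row || $row['Slave_IO_Running'] !== 'Yes' || $row['Slave_SQL_Running'] !== 'Yes') {
        // replication is broken; alert somebody before the two databases drift apart
        error_log('MySQL replication is not running on ' . gethostname());
    } elseif ((int) $row['Seconds_Behind_Master'] > 60) {
        error_log('MySQL replication is lagging by ' . $row['Seconds_Behind_Master'] . ' seconds');
    }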

PHP/MySQL clusters

I am currently planning a web application and I want to design it so that it can eventually run on a cluster later.
The cluster would be made up of a PHP web cluster, a MySQL cluster and a standalone storage unit (maybe a cluster of that too - I really don't know how that works :s).
I want to know whether the code will be different from when PHP and MySQL are on the same machine, and if so, what would be different.
The fact that the web and database servers are on different physical machines wouldn't change your code at all. The only place you'd need to change code is where you connect to the database - replacing the localhost reference with the IP address or hostname of the database server.
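For example (host name and credentials are placeholders):

    // the only code change: point the connection at the database box instead of localhost
    $db = new mysqli('db1.example.local', 'appuser', 'secret', 'mydb');   // was: new mysqli('localhost', ...)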
A clustered web server may need a different approach for storing sessions. If you have multiple web servers behind a load balancer, consecutive requests from the same session may end up on different servers. You should store the session data in a different place, like a central memcache.
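One way to do that with the pecl memcached extension is to switch PHP's session handler before session_start(); a minimal sketch, with a placeholder cache host:

    // store sessions centrally instead of on each web server's local disk
    ini_set('session.save_handler', 'memcached');
    ini_set('session.save_path', 'cache.example.local:11211');   // hypothetical central cache host
    session_start();
    
    $_SESSION['user_id'] = 42;   // now visible no matter which web server handles the next request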
Apart from a few of those issues, you should be fine regarding the web server.
As far as I know, MySQL and clustering are no friends. Although I wasn't really involved in the process, I know there has been a lot of trouble getting two database servers to run together in our environment, and even now they are not really clustered. They synchronize, but only one is actively used while the other is a fallback server.

Connecting to SQL Server very slow

I have a standard PHP app that uses SQL Server as the back-end database. There is a serious delay in response for each page I access. This is my development server, so it's not an issue with the live setup, but it is really annoying for working on the system.
I have a 5 - 8 second delay on each page.
I am running SQL Server 2000 Developer Edition on a virtual machine (Virtual PC).
I have installed SQL Server on my development machine but get the same delay.
I have isolated the issue to the call to mssql_connect (calling mssql_pconnect has no effect).
It seems to be a networking issue with how I have set up (or not set up, since I didn't really change the default config) SQL Server. It's not strictly a programming issue, but I thought I might get some valuable feedback here.
Can anyone tell me if there is a trick, specific set of protocols, registry setting, something that will kill this delay?
I was also experiencing a 5-10 second delay on every connect, using the official Microsoft SQL drivers for PHP (as suggested by @gaRex) - none of the answers posted here solved it for me.
As suggested by @ircmaxell, my problem was a DNS issue - and the solution was to edit the \windows\system32\drivers\etc\hosts file (your local hosts file) and add the name of my own machine to it.
In the "system properties" dialog, find the "computer name" of your machine - then add a line like 127.0.0.1 my-computer to your local hosts file.
For me, the delay occurred once more, on the following attempt to load the page - after that, it was super fast, no delay at all.
Note that this problem may occur even on a physical machine, not only on a VM.
I came across network issues when running Virtual PC - everything network-related is slow. Try adding this entry to your registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Create a new DWORD value named DisableTaskOffload and set its value to 1.
Restart the computer.
It worked for me, source.
Is it perhaps a DNS issue? I know that MySQL does a reverse DNS lookup on each login (not each connection). If you don't have a reverse DNS record for your server (or your DNS is slow), it can cause a major delay at login. There's an option in MySQL to disable that. I'm not sure about SQL Server, but I'd assume it may be doing something similar...
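If you want to test the DNS theory from PHP, one quick check is to connect by IP address instead of hostname and see whether the delay disappears (the IP and credentials below are placeholders):

    $start = microtime(true);
    $link  = mssql_connect('192.168.0.10', 'sa', 'secret');   // IP instead of hostname skips the DNS lookup
    echo 'connect took ' . round(microtime(true) - $start, 2) . " seconds\n";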
I remember the same problem, but I've forgotten how we solved it.
To clarify, please specify your exact connection strings and your SQL Server versions, and also try starting the good old utility c:\WINDOWS\system32\cliconfg.exe, which can also shed some light.
Yes, I know, it's from 2000, but the guys at M$ don't like to create client tools from scratch.
Also try to get the 'right' MSSQL client DLLs for PHP.
