Accessing a database located on another computer using PHP - php

My client has an offline product database for a high street shop that they update fairly frequently for their own purposes. They are now creating an online store, which they want to populate with product information from this database.
Migrating the database to a hosted server and abandoning the offline database is not an option due to their current legacy software setup.
So my question is: how can I get the information from their offline database to an online database? Their local server is always connected to the internet so is it possible to create a script on the website that somehow grabs the data from their server and imports it into the online server? If this ran every 24 hours it would be perfect. But is it even possible? And if so how would I do it?
The only other option I can think of is to manually upload the database after every update, but this isn't really a viable idea.

I did something like this with QuickBooks using an ODBC connection. Using that, I synced data to MySQL. This synchronization, however, was one-way only. Unless you have keys in the data that indicate when something was changed (an updated date), you will end up syncing a lot of extra data.
Using SQLyog, I set up a scheduled job that connected to the ODBC data source and pushed the changes since the last sync to the MySQL database I was using to generate reports. If you can get the data replicated into MySQL, it should be easy at that point to make use of it in your online store.
The downside is that it won't be real-time. Inventory could become a problem.
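As a rough illustration of that kind of one-way pull, here is a minimal sketch in PHP, assuming the offline database exposes an ODBC DSN and each product row carries an updated_at column (the DSN, table and column names are all made up):

```php
<?php
// Hypothetical one-way sync: pull rows changed since the last run from the
// ODBC source and upsert them into MySQL. All names here are illustrative.
$lastSync = is_file('last_sync.txt')
    ? trim(file_get_contents('last_sync.txt'))
    : '1970-01-01 00:00:00';

$odbc  = odbc_connect('OfflineProductsDSN', 'odbc_user', 'secret');
$mysql = new PDO('mysql:host=localhost;dbname=shop;charset=utf8mb4', 'shop', 'secret',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

$upsert = $mysql->prepare(
    'INSERT INTO products (sku, name, price, updated_at)
     VALUES (:sku, :name, :price, :updated_at)
     ON DUPLICATE KEY UPDATE name = VALUES(name), price = VALUES(price),
                             updated_at = VALUES(updated_at)'
);

// $lastSync comes from our own state file, so interpolating it is fine for a sketch.
$result = odbc_exec($odbc, "SELECT sku, name, price, updated_at
                            FROM products WHERE updated_at > '$lastSync'");

while ($row = odbc_fetch_array($result)) {
    $upsert->execute([
        ':sku'        => $row['sku'],
        ':name'       => $row['name'],
        ':price'      => $row['price'],
        ':updated_at' => $row['updated_at'],
    ]);
}

file_put_contents('last_sync.txt', date('Y-m-d H:i:s'));
odbc_close($odbc);
```

Run from a daily scheduled task (cron or Windows Task Scheduler), this gives you the 24-hour sync described in the question, with the same caveat that inventory will lag by up to a day.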

In an ideal world I would look at creating a RESTful API that would run on the same server, or at least on the same network, as your offline database. This RESTful API would run as a web server over HTTP and return JSON or even XML structures of data from the offline database. Clients running on the internet would be able to connect and fetch any data they need, at any time. A RESTful API like this has a number of advantages.
Firstly, it's secure. You don't have to open up an attack vector by making connections to your offline database public. The only thing you have to do is enable public access to your RESTful API. In your API's logic you might not even include functionality to write to the database, so even if your API's security is compromised, at the very worst all attackers can do is read your data, not corrupt it.
Having a RESTful API in this situation also represents a good separation of concerns. Your client code should not know anything about the database, nor should it know about any internal systems that the offline database uses. What happens when your clients want to update their offline system or even change it? In this situation all you would have to do is update the RESTful API. The client that is connecting to the data no longer cares about anything but the API, so changing databases would be easy.
Another reason to consider an API is concurrency. I hinted at this before, but having an API would be great if you ever need to have more than one client accessing the offline database's data. In a web server setup where you have the API sitting and waiting for requests, there is no reason why you could not have more than one client connecting to the API at the same time. HTTP is really good at this!
You talked about having to place the old data in a new database. Something like this could be done easily with a RESTful API, as you would just have to map the endpoints of your API to tables in the new database and run that when you need to. You could even forgo the new database and use the API as your backend. This solution would require some caching, but it would cut down on the duplication of a database if you don't feel it's needed.
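To make that concrete, here is a minimal sketch of what one read-only endpoint of such an API might look like in PHP, assuming the offline database can be reached via PDO and has a products table (every name here is illustrative, and the PDO driver would change to match the actual offline database):

```php
<?php
// Hypothetical read-only endpoint, e.g. GET /products.php?since=2014-01-01
// The DSN, table and column names are illustrative; adjust the PDO driver
// (mysql, odbc, sqlsrv, ...) to whatever the offline database actually is.
header('Content-Type: application/json');

$pdo = new PDO('mysql:host=127.0.0.1;dbname=offline_shop;charset=utf8mb4', 'reader', 'secret',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

$since = $_GET['since'] ?? '1970-01-01';

// Only SELECTs are exposed, so a compromised API can read but not corrupt data.
$stmt = $pdo->prepare('SELECT sku, name, price, stock, updated_at
                       FROM products
                       WHERE updated_at >= :since');
$stmt->execute([':since' => $since]);

echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));
```

The online store (or a nightly import job) can then call this endpoint and decide for itself how to store or cache what comes back.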
The drawback to all of this is that writing an API is more complex than writing a script. So in this situation I believe it's horses for courses. If this database is the backbone of a long-term project that will be expanding in the future, an API is the way to go. If it's a small part of your project, then maybe you can swing it with a script that runs every 24 hours; however, I have done this before, and the second I have to change or edit the solution, things start getting a little "hairy". Hope this helps and good luck with it.

Related

Electron desktop app with online server and database?

I am working on a desktop application with Electron and I am considering online storage to store data. I would like to get some ideas on the approach, as I couldn't find reliable answers from a Google search.
Approach 1. Electron app (front end) + PHP (e.g. purchase a hosting package from GoDaddy with a domain like www.mysite.com).
With this approach I am planning to create API calls in PHP to perform basic CRUD.
Is this a good way?
Will this affect the speed/load time?
Are there better ways for this situation?
Thank you very much in advance for the help.
Well, this is not an easy topic. Your solution could work: your Electron app asks your server for data and stores data to it. Anyway, the best solution depends on your application.
The most important points that you have to ask yourself are:
How often do you need to reach your server?
Could your users work without data from the server?
How long does it take to read and store data on your server? (It's different if you store a few KB or many GB of data.)
Must the data stored online be shared with other users, or does every user access only their own data?
If all the information is stored on your server, your app's startup has to wait for the requests to complete, but you can show a loader or something like that to mitigate the wait.
In my opinion you have many choices, from the simplest (and slowest) to the most complex (but which mitigates network lag):
Simple AJAX requests to your server: as you described you will do some HTTP requests to your server and read and write data to be displayed on your application. Your user will have to wait for the requests to complete. Show them some loading animations to mitigate the wait.
There are some solutions that save the data locally in your Electron installation and then sync it online; have a look at PouchDB for an example.
Recently I've been looking at GraphQL. GraphQL is a query layer for your data. It's not that easy, but it has some interesting features: its clients have an internal cache and it is designed with optimistic updates in mind. You update your application immediately, assuming your POST will succeed, and then if something goes wrong you update it accordingly.
I'd also like to suggest trying some solutions offered as a service. You don't have a server already and you would have to sign up for a new contract anyway, so why not check out a dedicated service like Firebase? Google Firebase Realtime Database lets you work in JavaScript (just one language involved in the project) and syncs your data online automatically and between devices without the need to write any web service. I have just played with it for some prototypes, but it looks very interesting and it's cheap. It also has a free plan that is enough for many users.
Keep in mind that if each user only has access to their own data, the fastest and easiest solution is to use a database inside your Electron application: an SQLite database, an IndexedDB database, or even serializing to JSON and storing everything in localStorage (if your data fits the size limits).
Hope this helps

Efficiently communicating between frontend and backend sites without exposing backend

I'm trying to figure out whether my current approach will lead me into performance issues into the future, before developing further with this design, and whether there are better ways of doing this. I think this makes the most sense if I provide some context on the design first:
Current Design
I currently have my environment designed with two separate servers, let's call them frontend and backend.
Frontend
This server is open to the world. Customers access this site to view our product, make purchases, and will soon be able to view their account related information.
Backend
This server is where all information is held in a database.
Communication
The only way the frontend currently needs access to the backend is when the user authenticates with their license and downloads our product. To do this, the frontend calls a PHP script, which sends a JSON request to the backend server via curl_exec. The response from the backend tells the frontend how to handle that download request (e.g. license invalid).
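For readers who want to picture it, the frontend side of that call might look roughly like this (the backend URL, endpoint and field names are placeholders, not the actual API):

```php
<?php
// Hypothetical frontend-side helper: POST a JSON request to the backend API
// via curl_exec and decode its JSON verdict. URL and fields are placeholders.
function callBackend(string $licenseKey): array
{
    $ch = curl_init('https://backend.internal.example/api/validate-license');
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => json_encode(['license' => $licenseKey]),
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT        => 10,
    ]);
    $body = curl_exec($ch);
    curl_close($ch);

    return $body === false ? ['status' => 'error'] : json_decode($body, true);
}

// The backend's answer (e.g. ['status' => 'license_invalid']) decides how the
// frontend handles the download request.
$response = callBackend($_POST['license'] ?? '');
```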
Reasoning
The reason for this design is to avoid exposing the backend details to the user. Client-side, all the user sees is a request being sent to the frontend server. If the frontend server is ever compromised, anyone reading through how the frontend is built has no access to the backend DB, unless they know exactly what parameters to send to the backend API. Even then, it only gives access to a very low subset of information, depending on what the API exposes.
The Problem
The only time this cross-server communication happens right now is when a user tries to download our products using their license details. Relatively speaking, the traffic through this API between the two servers is low.
My concerns are that I want to build a user "control panel". From here they can log in with their license/account, they can view their active licenses, access details on previous orders they made, etc. This already means all these pieces of information are only available through the backend, so I'll need to expose them through the API - which is fine. The issue here is that every request the user makes through the control panel (even just refreshing the page) will build up a lot of traffic between both servers.
Questions
From the experience of developers here, is this communication design scalable? I'm worried I'm building around a bottleneck, which will just result in a slow user interface, since the frontend would end up waiting on a lot of requests it tunnelled through to the backend.
What are your thoughts? Has anyone faced a similar challenge? How did you overcome that challenge? What is the best practice to achieving this kind of requirement? I hope this question doesn't come across as too vague.
I would love to hear other answers but I will share my thoughts.
First, let's call your servers:
Application Server
Database Server
It seems that you are worried about creating a bottleneck due to an increase in the amount of database queries. Since you mentioned that these queries would execute after a page refresh, it's clear that you are not using a cache of any sort. If you could cache the database queries and invalidate the cache only if the data has changed (i.e. the user's actions cause the data to change, so the cache should be cleared) then you will increase performance drastically.
If anyone gains access to your application server, they will most likely be able to access the database server with the user that you've allowed the application server to use. You should give this user as little permissions as necessary to use the API. Still, they may be able to access a lot depending on what your API allows and what you have cached on the application server.
Take a look at Laravel's cache API, which allows you to use your cache in place of a database query. If the cache entry does not exist, the database query is executed and its result cached. Then you would delete the necessary caches based on user actions. You can also asynchronously re-cache database requests so you don't hold up the response to the client when the data is not needed for that response.
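As a rough sketch of that pattern with Laravel's cache facade (the cache key, TTL, table and query below are made up for illustration):

```php
<?php
// Rough sketch of cache-aside with Laravel; key, TTL and query are made up.
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;

// Serve the user's licenses from cache; only run the query on a cache miss.
$licenses = Cache::remember("user:{$userId}:licenses", now()->addMinutes(30), function () use ($userId) {
    return DB::table('licenses')->where('user_id', $userId)->get();
});

// Invalidate when a user action changes the underlying data,
// e.g. right after they activate or purchase a license.
Cache::forget("user:{$userId}:licenses");
```

With something like this in front of the control panel queries, a page refresh mostly hits the cache instead of tunnelling another request through to the database server.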
I hope this helps.
UPDATE:
After discussing with you further, I better understand your dilemma. You are trying to increase the security of your application by requiring all API calls to go through an extra step of being initiated after a POST request. I agree that this is going to be a bottleneck as the application scales since you won't be able to take advantage of caching and every page request will result in database queries.
What I have done in a similar case is to separate the application server and the database server, except the database server is literally only a database server, without any logic or scripts. PHP, for example, is not even installed on the database server. Database servers and application servers are connected only via private networking, so the database servers are accessible only via the application server. A safe user has been set up to use the remote database.
Since my database queries take a lot of time, I cache as much as possible.
Also consider using https://cloudflare.com. It is a reverse proxy in front of the application server, which adds another layer between the client (browser) and your application server. This way, only Cloudflare has access to your application server, and only your application server has access to your database server via the safe database user you create.
I'm no expert on databases, but using prepared statements would help you a lot, as they are more secure. The best part is:
"Bound parameters minimize bandwidth to the server as you need send only the parameters each time, and not the whole query"
Hope it helps!

Database-driven web app: How to handle offline use

I'm developing a web app using Laravel (a PHP framework). The app is going to be used by about 30 of my co-workers on their Windows laptops.
My co-workers interview people on a regular basis. They will use the web app to add a new profile to a database once they interview somebody for the first time and they will append notes to these profiles on subsequent visits. Profiles and notes are stored using MySQL, but since I'm using Laravel, I could easily switch to another database.
Sometimes, my co-workers have to interview people when they're offline. They might visit a group of interviewees, add a few profiles and add some notes to existing ones during a session without any internet access.
How should I approach this?
1. With a local web server on every laptop. I've seen applications ship with some kind of installer including a LAMP stack, but I can't find any documentation on this.
2. I could install the app and something like XAMPP on every laptop myself. That would be possible, but in the future more people might use the app and not all of them might be located nearby.
3. I could use Service Workers, maybe in connection with a library such as UpUp. This seems to be the most elegant approach.
I'd like to give option (3) a try, but my app is database driven and I'm not sure whether I could realize this approach:
Would it be possible to write all the (relevant) data from the DB to - let's say - a JSON file which could be accessed instead of the DB when in offline mode? We don't have to handle much data (less than 100 small data records should be available during an interview session).
When my co-workers add profiles or notes in offline mode, is there any "web service" way to later insert the data they entered into the DB?
Thanks
Pida
I would think of it as building the app in "two parts".
First, the front end uses ajax calls to the back end (which isn't anything but a REST API). If there isn't any network connection, store the data in the browser using local storage.
When the user later has network connection, you could send the data that exists in the local storage to the back end and clear the local storage.
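The back-end half of that could be a small endpoint that accepts the queued records as one JSON array. A minimal PHP sketch (in Laravel this would live in a controller, and the table and field names below are made up to match the profiles/notes example):

```php
<?php
// Hypothetical sync endpoint: the web app POSTs the notes it queued in
// localStorage while offline as a JSON array; we insert them in one transaction.
header('Content-Type: application/json');

$records = json_decode(file_get_contents('php://input'), true) ?: [];

$pdo = new PDO('mysql:host=localhost;dbname=interviews;charset=utf8mb4', 'app', 'secret',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

$pdo->beginTransaction();
$stmt = $pdo->prepare('INSERT INTO notes (profile_id, body, created_at)
                       VALUES (:profile_id, :body, :created_at)');
foreach ($records as $r) {
    $stmt->execute([
        ':profile_id' => $r['profile_id'],
        ':body'       => $r['body'],
        ':created_at' => $r['created_at'],
    ]);
}
$pdo->commit();

// The client should clear its localStorage queue only after this acknowledgement.
echo json_encode(['synced' => count($records)]);
```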
If you add web servers on the laptops, the databases and info will only be stored on their local laptops and would not be synced.
You can build what you describe using service workers to cache your site's static content to make it available offline, and a specific fetch handler in the service worker to detect a failed PUT or POST and queue the data in IndexedDB. You'd then periodically check IndexedDB for any queued data when your web app is loaded, and attempt to resend it.
I've described this approach in more detail at https://developers.google.com/web/showcase/case-study/service-workers-iowa#updates-to-users-schedules
That article assumes the use of the sw-precache library for caching your site's static assets, and the sw-toolbox library to provide runtime fetch handlers that check for failed business-logic requests. It also uses a promise-based IndexedDB wrapper called simpleDB although I'd probably go with the more recent idb library nowadays.

How to update a remote MS Access database?

I need to create a web app to show and allow editing of a set of data.
This data is contained in an Access Database file, used by another application (a desktop application).
I'm evaluating the best way to carry out this job.
Unfortunately, my proposal to migrate to another database solution (an RDBMS such as MySQL or Postgres) was rejected by the customer.
The issue here is how to keep the data intact and synchronized between the server and the desktop that runs the application which also uses this data.
All I need to do is read data, store edited or new data, give authorized users an interface to review the newly inserted data (thus validating it), and import it into the original Access database.
I've found the following possible solutions (to update the desktop mdb copy), but each of them has pros and cons:
1. Remote access to the Windows machine
   - exposes the machine to unauthorized access
2. Use rsync to keep files synchronized (once a day)
   - if the mdb on the client has been edited with the desktop application there will be data loss
   - can be updated only when all data has been validated
   - data won't really be synchronized (until rsync runs)
3. Client-server applications
   - can use secure layers to protect data against attackers
   - a third application (on the desktop) is required
   - synchronization requires authorized users to use this third application to import data (which will query the remote DB and update the local mdb)
Do you know some other way that could help me to get this done?
I'm leaning toward the client-server model, even if it would be more expensive, but it's the only way I see to make this work.
Do you see any other pros/cons of the proposed solutions?
I haven't chosen the language to develop this in yet, but I was thinking of using either PHP and/or Python.
The remote environment (for the server) can either be Windows or *nix (preferred).
Thanks.
The first idea:
exposes the machine to unauthorized access
This is not really a valid argument. Everything you put on the Internet is exposed, and it is not like it cannot be further protected via SSL/TLS. Even RDP can be secured via an SSH tunnel, for example.
To my mind, the easiest and most elegant way to do this is by using web services (SOAP). Write the server code that does inserts/updates on the Access database with something like Python or Java. Generate a WSDL from the working code. From the WSDL you can generate a client for PHP/Python. Now all you have to do is write the web interface that uses the PHP/Python client.
For security using SSL and Basic authentication should be enough (supported by SOAPpy in the case of Python, for example).
You can use pyodbc to connect to the Access database.
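On the PHP side, consuming such a WSDL with the built-in SoapClient could look roughly like this (the service URL, credentials and operation names are invented for illustration; the real operations are defined by your WSDL):

```php
<?php
// Rough sketch of a PHP SOAP client; URL, credentials and operation names
// are invented, and the real operations come from the generated WSDL.
$client = new SoapClient('https://shop-server.example/access-api?wsdl', [
    'login'      => 'api_user',   // HTTP Basic authentication over SSL
    'password'   => 'secret',
    'exceptions' => true,
]);

// Fetch records changed since the last run from the Access-backed service.
$records = $client->GetChangedRecords(['since' => '2013-01-01T00:00:00']);

// Push validated rows back so the service can import them into the mdb.
$client->ImportValidatedRecords(['records' => $records]);
```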
Well, you can use two databases and synchronize changes with a sort of web service between them,
separating the web server DB (for which you could use a modern MySQL or whatever) from the current Access DB.
You should build a sort of REST API returning new or changed records for a GET request, deleting for a DELETE request, etc., passing a timestamp with the HTTP request.
Then you could query from each side, with a scheduled job, for new records on the other side (transferring them as JSON), keeping the records relatively in sync.
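Since the question mentions PHP as one of the candidate languages, the pull half of such a scheduled job could look roughly like this (the endpoint, credentials, table and field names are all invented):

```php
<?php
// Hypothetical pull half of the sync job, run from cron: ask the other side's
// mini-API for records changed since the last run and upsert them locally.
$since = is_file('last_pull.txt')
    ? trim(file_get_contents('last_pull.txt'))
    : '1970-01-01T00:00:00';

$context = stream_context_create(['http' => [
    'header' => 'Authorization: Basic ' . base64_encode('sync_user:secret'),
]]);
$json = file_get_contents(
    'https://desktop-side.example/api/records?since=' . urlencode($since),
    false,
    $context
);

$pdo  = new PDO('mysql:host=localhost;dbname=webapp;charset=utf8mb4', 'webapp', 'secret',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
$stmt = $pdo->prepare('REPLACE INTO records (id, payload, updated_at)
                       VALUES (:id, :payload, :updated_at)');

foreach (json_decode($json, true) ?: [] as $record) {
    $stmt->execute([
        ':id'         => $record['id'],
        ':payload'    => json_encode($record['payload']),
        ':updated_at' => $record['updated_at'],
    ]);
}

file_put_contents('last_pull.txt', date('c'));
```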
You could take care of security by exposing the application DB only on a certain port and only to HTTP queries coming from the web app server's IP address, and also by using HTTP auth, hashes, etc.
If this isn't a heavy-load, high-concurrency app (which I guess it isn't, since you use Access as a DB), this should work.
You could build this kind of mini-API with any Python web framework like TurboGears 2.1 or Django, or with micro frameworks like Bottle or Flask.
P.S. If you prefer Python (and why wouldn't you), don't use pyodbc directly; work with Python's beautiful ORM, SQLAlchemy, which is much better.
I think how this works really depends on the authentication issue and number of users that need to review the data.
The reason I ask?
You can consider using Access 2010 and Office 365. This allows you to have tables linked to the cloud, but the tables are also cached locally in your Access desktop application. This means real-time replication/sync of data is used, and this is automatic for Access 2010 (so you don't have to write any code).
What this means is that while running the Access desktop application, you can pull the plug on the network and it will continue to run. The instant you have Wi-Fi or a connection again, local changes are synced up to Office 365. Even better, you can now build web forms in Access.
Data touched or edited (or new records on either side) will come down the pipe to your local computer. So if you add records in the Access client, the web users will ALSO see these new records.
So Access 2010 now has web publishing, and this works with the new Office 365. The price starts at $6 per month. And if it's just for a few users, have them all log on using the same account! This means you can have this all up and running in less time than it took to make this post, and for less than $10 per month!
For those not aware, Access 2010 has web publishing. When you publish the Access forms, they are converted to .NET (XAML) forms, and the code is converted to JavaScript. So form code actually runs browser-side.
Since the system runs on Office 365, you are using some heavy-duty iron and you can in theory scale out to millions of users with this setup. When you publish the Access application to Office 365, on the server side you are not using mdb or Access files anymore, but what is called Access Web Services. The tables in fact become the equivalent of SharePoint lists. And new for SharePoint 2010, those lists now have relational features like cascade delete.
The real beauty of this system is that you can write and create and do everything inside of Access without having to learn or touch ANY KIND of server-side technology. Here is a short video of mine; at the halfway point I run the Access application with nothing more than a web browser.
http://www.youtube.com/watch?v=AU4mH0jPntI
There is no ActiveX or even Silverlight required. In fact my Access applications run fine on an iPad using the Safari web browser.
So you could consider to continue using Access, and just publish your application to the web with the new Access 2010 features.

Connect a database with external tables

I have never done anything like this before.
I have a system and I need to relate my data with external data (in another database).
My preference would be to get this data and create my own tables, but in that case, when the other DBs are updated, my own tables will become obsolete.
So basically I need to synchronize my tables with the external tables, or just get the external data values.
I don't have any idea how I can connect and relate data from ten external databases.
I need to check if an user is registered in another websites basically.
Any help?
I am currently doing something similar.
The easiest way I found is to pull the data in. I do bi-directional synchronisation in my project, but you haven't mentioned this, so I imagine a data pull is what you are aiming for.
You need to have user accounts on the other servers, and each account needs to be created with an IP instead of 'localhost'. You will connect from your end through the MySQL client using the IP of the distant host instead of the usual localhost.
see this page for a bit more info.
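In PHP, the remote connection itself is just a matter of pointing the client at the distant host's IP; a minimal sketch (the IP, credentials, database and table names are placeholders):

```php
<?php
// Connect to a remote MySQL server by its IP instead of localhost.
// IP, credentials, database and table names are placeholders; the remote
// MySQL account must have been granted access from this server's IP.
$remote = new PDO('mysql:host=203.0.113.42;dbname=other_site;charset=utf8mb4',
                  'sync_user', 'secret',
                  [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

// e.g. check whether a user is already registered on that site.
$stmt = $remote->prepare('SELECT COUNT(*) FROM users WHERE email = :email');
$stmt->execute([':email' => 'someone@example.com']);
$isRegistered = (bool) $stmt->fetchColumn();
```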
If, like me, you have to interface with different DB server types, I recommend using a database abstraction library to ease managing data in a seamless way across different SQL servers. I chose the Zend_Db components, used standalone with Zend_Config, as they support MySQL and MSSQL.
UPDATE - Using a proxy DB to access mission critical data
Depending on the scope of your project, if the data is not accessible straight from the remote database, there are different possibilities. To answer your comment, I will tell you how we resolved the same issues on the current project I am tied to. The client has a big MSSQL database that is his business-critical application: accounting, invoicing, inventory, everything is handled by one big app tied to MSSQL. My mandate was to install a CRM (running on MySQL, by the way) and synchronise the customers of his mission-critical MSSQL app into the CRM.
I did not want to access this data straight from my CRM; this CRM should never touch their main MSSQL DB. I certainly am not willing to take the responsibility of something ever going wrong down the line; even though in theory this should not happen, in practice theory is often worthless. The recommendation I gave (and which was implemented) was to set up a proxy database on their end. That database, located on the same MSSQL instance, has a task that copies the data into a second database nightly. This one I am free to access remotely. A user was created on MSSQL with access to just the proxy, and connections accepted from just one IP.
My script has to sync both ways, so in my case I do a nightly 'push' of the modified records from MSSQL to the CRM and a 'pull' of the added CRM records into the proxy DB. The intern gets notified by email of new records in the proxy to bring into their MSSQL app. I hope this was clear enough; I realize it's hard to convey clearly in a few lines. If you have other questions feel free to ask.
Good-luck!
You have to download a backup (gzip, zip) of the wanted part (or all) of the database and upload it to the other database.
By the way, cron jobs won't help you at this point, because you can't get access to any DB from outside.
Does the other website have an API for accessing such information? Are they capable of constructing one? If so, that would be the best way.
Otherwise, I presume your way of getting data from their database is by directly querying it. That can work too: just make a mysql_connect call to their location and query it just as if it were your own database. Note: their DB will have to be set up to accept outside connections for this method to work.
