I have three applications that share the same database. All three are built with PHP (the Laravel framework). In the beginning there was only one app, so I managed my database with Laravel's built-in migration feature (creating/modifying/deleting tables and rolling these changes back). This way it was possible to track all changes in version control.
Now that I have three apps, I wonder where I should manage my database. Where is my point of truth? As I see it, I have multiple options:
Using one app as the "database master". Only this app may change the structure of the database; all other apps only read and write data.
Letting every app make its own changes. This would result in huge chaos, as rolling back or rebuilding from scratch would be nearly impossible.
Using a third-party tool (e.g. MySQL Workbench) to edit my DB. This seems like a valid option to me, as such tools are built to handle databases and their structure/data. But is there some kind of version control? Can I roll back changes I've made?
Using a fourth application (in my case another Laravel app) that is responsible only for the database. This way it would be possible to track all changes in Git.
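For illustration, that fourth option can be surprisingly lightweight: a bare Laravel app whose only real contents are the migrations. A minimal sketch of one shared migration living there (the table and class names are invented for the example):

```php
<?php
// A migration living only in the dedicated "schema" app; the three
// consumer apps would contain no migrations of their own.

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class CreateCustomersTable extends Migration
{
    // Applied with `php artisan migrate`, run from the schema app only.
    public function up()
    {
        Schema::create('customers', function (Blueprint $table) {
            $table->increments('id');
            $table->string('name');
            $table->timestamps();
        });
    }

    // Undone with `php artisan migrate:rollback`.
    public function down()
    {
        Schema::dropIfExists('customers');
    }
}
```

Only this app would ever run `php artisan migrate` or `php artisan migrate:rollback` against the shared database, so that one repository's Git history becomes the single point of truth for the schema.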
This is the first time I've come across this problem, and I'd be happy to gather some information about best practices.
Edit
The applications share the same tables.
Related
I'm developing a web app using Laravel (a PHP framework). The app is going to be used by about 30 of my co-workers on their Windows laptops.
My co-workers interview people on a regular basis. They will use the web app to add a new profile to a database once they interview somebody for the first time and they will append notes to these profiles on subsequent visits. Profiles and notes are stored using MySQL, but since I'm using Laravel, I could easily switch to another database.
Sometimes, my co-workers have to interview people when they're offline. They might visit a group of interviewees, add a few profiles and add some notes to existing ones during a session without any internet access.
How should I approach this?
1. With a local web server on every laptop. I've seen applications ship with some kind of installer including a LAMP stack, but I can't find any documentation on this.
2. I could install the app and something like XAMPP on every laptop myself. That would be possible, but in the future more people might use the app, and not all of them might be located nearby.
3. I could use Service Workers, maybe in combination with a library such as UpUp. This seems to be the most elegant approach.
I'd like to give option (3) a try, but my app is database-driven and I'm not sure whether I could realize this approach:
Would it be possible to write all the (relevant) data from the DB to, say, a JSON file which could be accessed instead of the DB when in offline mode? We don't have to handle much data (fewer than 100 small data records should be available during an interview session).
When my co-workers add profiles or notes in offline mode, is there any "web service" way to insert the data they entered into the DB once they're back online?
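For what it's worth, the export half of this could be a single Laravel route; a rough sketch, with the route, table, and field names invented for the example:

```php
<?php
// routes/web.php - a rough sketch, not a drop-in solution.
// Route and table names ('profiles', 'notes') are invented.

use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Route;

// Export everything a co-worker needs for one offline interview session.
// The browser fetches and stores this JSON before going offline.
Route::get('/api/offline-bundle', function () {
    return response()->json([
        'profiles' => DB::table('profiles')->get(),
        'notes'    => DB::table('notes')->get(),
    ]);
});
```

With fewer than 100 small records per session, shipping the whole working set as one JSON payload that the browser stores before going offline seems entirely practical.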
Thanks
Pida
I would think of it as building the app in "two parts".
First, the front end uses AJAX calls to the back end (which is nothing but a REST API). If there isn't any network connection, store the data in the browser using local storage.
When the user later has a network connection, you send the data held in local storage to the back end and clear the local storage.
If you instead add web servers on the laptops, the databases and data will only be stored on each local laptop and will not be synced.
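A minimal sketch of what the back-end half of this could look like in the asker's Laravel setup (the route, table, and field names are assumptions):

```php
<?php
// routes/web.php - a sketch of the "flush the offline queue" endpoint.
// Table and field names are assumptions, not from the original post.

use Illuminate\Http\Request;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Route;

Route::post('/api/sync', function (Request $request) {
    // The front end POSTs whatever records piled up in local storage.
    foreach ($request->input('notes', []) as $note) {
        DB::table('notes')->insert([
            'profile_id' => $note['profile_id'],
            'body'       => $note['body'],
            'created_at' => $note['created_at'], // keep the offline timestamp
        ]);
    }
    // A success response is the client's cue to clear its local queue.
    return response()->json(['status' => 'ok']);
});
```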
You can build what you describe using service workers to cache your site's static content to make it available offline, and a specific fetch handler in the service worker to detect a failed PUT or POST and queue the data in IndexedDB. You'd then periodically check IndexedDB for any queued data when your web app is loaded, and attempt to resend it.
I've described this approach in more detail at https://developers.google.com/web/showcase/case-study/service-workers-iowa#updates-to-users-schedules
That article assumes the use of the sw-precache library for caching your site's static assets, and the sw-toolbox library to provide runtime fetch handlers that check for failed business-logic requests. It also uses a promise-based IndexedDB wrapper called simpleDB, although I'd probably go with the more recent idb library nowadays.
I have a web application that stores data in a MySQL database online. It also retrieves data using PHP code, performs calculations on the server, and sends the results back to the user.
The data is quite simple: names, descriptions, prices, VAT, and hourly charges that are read from the database and manipulated on the server side.
Clients often work in environments where the internet connection is poor or unavailable. In this case I would like the client to be able to work offline: enter new names, descriptions, and prices, and use the last known VAT to perform calculations, then synchronise all data as soon as a connection is available.
Now the problem is that I do not know the best way or technologies for achieving this. Don't worry, I am not asking you to write code for me. Can you just explain the correct way to build such a system?
Is there a simple way to use my online MySQL and PHP code locally?
Should I save the data I need in a local file, rebuild the calculations in JavaScript, perform them locally, and then synchronise the data once the database is available?
Should I use two MySQL databases, one local and one online, and synchronise between the two when a connection is available? If so, which technology (language) should I use to perform this operation?
If possible, I would like an answer from PHP coders who have worked on a similar project in the past and can give me detailed information on framework structure and technology to use. Please remember that I am new to this way of writing applications, and I would appreciate it if you could spare a few minutes and explain everything to me as if I were six years old or stupid (which I am!).
I really appreciate any help and suggestions.
Ciao,
Donato
There are essentially three ways to go:
Version 1: "Old school": PHP-Gtk+ and bcompiler
First, if you have not done so already, you need to separate your business logic from your presentation layer (HTML, templating engines, ...) and your database layer.
Then adapt your database layer so that it can live with an alternative DB (a local SQLite database comes to mind) and perform synchronisation when online again (a sketch follows below).
Finally, use PHP-Gtk+ to create a new UI and package all of this with bcompiler.
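The DB-layer adaptation in the second step mostly boils down to hiding the connection behind one factory, so the same code runs against MySQL online and SQLite offline. A minimal sketch (DSNs and credentials are placeholders):

```php
<?php
// db.php - one place that decides which database the app talks to.
// DSNs and credentials are placeholders.

function getConnection(bool $online): PDO
{
    if ($online) {
        // Normal mode: the central MySQL server.
        $pdo = new PDO('mysql:host=db.example.com;dbname=app', 'user', 'secret');
    } else {
        // Offline mode: a local SQLite file with the same schema.
        $pdo = new PDO('sqlite:' . __DIR__ . '/offline.sqlite');
    }
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    return $pdo;
}

// The calling code never cares which engine is behind the handle, as long
// as the SQL sticks to the common subset of MySQL and SQLite.
$db = getConnection(false);
$db->exec('CREATE TABLE IF NOT EXISTS prices (id INTEGER PRIMARY KEY, amount REAL)');
```

Synchronisation then reduces to replaying the rows written into the SQLite file against MySQL once the connection is back.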
Version 2: "Standard": Take your server with you
Look at Server2Go, WampOnCD, and friends to create a "double-clickable web server" (start at Z-WAMP).
You still need to adapt your DB layer as in Version 1
Version 3: "Web 2.x": Move application from server to browser
Move your application logic from the server side (PHP) to the client side (JS)
Make your server part (PHP) only a data access or sync layer
Use the HTML5 offline features to replace your data access with local data while offline, and to resync when online again.
Which one is best?
This depends on what you have and what you want. If most of your business logic is in PHP, then moving it into the browser might be prohibitively expensive - and be aware that this also creates a whole new class of security nightmares. I personally do not recommend porting an existing app this way, but I do recommend this approach for new apps, if the backing DB is not too big.
If you choose to keep your PHP business logic, then the decision between 1 and 2 is often a question of how much UI your app has - if it's only a few CRUD forms, Version 1 might be a good idea; it is definitely the most portable (in the sense of taking it with you). If not, go with Version 2.
I have worked with a similar system for ships. Internet access is expensive in the middle of the ocean, so they have local web servers installed, with database synchronisation via e-mail.
We also created simple .exe packages so that people with no experience can install or update the system...
I wrote an application in VB6/Access for a retail shop almost 8 years ago. They are still using it, and now they are asking for changes/upgrades and want to access it from multiple locations, with multiple machines per location. Earlier it was just one machine per location.
All locations will run the same application; only the inventory and customers differ, along with the app settings. Inventory should be able to move between locations.
I have lost touch with VB and Access, and I would also like to rewrite the app with open-source tools.
I'm a PHP/MySQL web developer and can do HTML5 if necessary. I believe I can rewrite all the functionality with PHP/MySQL, but I am not confident about printing.
The main requirements of the app are that it should print as fast as possible and support several custom paper sizes.
Also, the database should work in a distributed environment: every location should be able to work independently as well as sync updates when connected.
What is the best thing I can do in this situation?
Would you recommend creating a web app, with a desktop client only for printing (i.e. VB on Windows or a shell script on Linux)? Or any alternative?
Any recommended workflows/links for database setup/mirroring?
Or should I modify the existing VB application to run with the required MySQL architecture?
Sorry to violate the one-question-per-post rule, but I don't know how to split this up.
Let's start with printing.
You could use a print CSS file, but it's not very precise. That would be printed from the client's browser.
You could also generate a PDF. With that you could print from the server or from the client; the server would be the faster option, although multiple printers could get complicated.
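As a rough sketch of the PDF route using Dompdf (one option among several, e.g. TCPDF or mPDF; the receipt content and paper size below are invented):

```php
<?php
// Sketch: render an HTML receipt to a PDF on the server with Dompdf.
// Install with: composer require dompdf/dompdf

require 'vendor/autoload.php';

use Dompdf\Dompdf;

$dompdf = new Dompdf();
$dompdf->loadHtml('<h1>Receipt #1042</h1><p>Total: 19.99</p>');

// Custom paper size in points (1 pt = 1/72 inch) - here roughly an
// 80 mm x 200 mm receipt-printer roll. Adjust to the shop's stock.
$dompdf->setPaper([0, 0, 226.77, 566.93]);

$dompdf->render();

// Stream the PDF to the browser for client-side printing; alternatively,
// $dompdf->output() returns the raw bytes for saving or server printing.
$dompdf->stream('receipt.pdf', ['Attachment' => false]);
```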
Database sync:
I would treat the central database as a separate app and devise rules for each location to sync with the central location. You may not need to share all the data; replicating everything can quickly get you into complex replication rules.
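As one concrete shape those rules could take, here is a sketch of the push step, assuming every synced table carries an `updated_at` column and globally unique IDs (both assumptions, not givens):

```php
<?php
// Sketch: push local rows changed since the last sync to the central DB.
// Assumes every synced table has an `updated_at` column and globally
// unique IDs (e.g. prefixed with a location code) so inserts from
// different shops never collide.

function pushChanges(PDO $local, PDO $central, string $table, string $lastSync): void
{
    // $table must come from a hard-coded whitelist; identifiers
    // cannot be bound as parameters.
    $stmt = $local->prepare("SELECT * FROM {$table} WHERE updated_at > ?");
    $stmt->execute([$lastSync]);

    foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
        // Last-writer-wins upsert: crude, but workable when locations
        // rarely touch each other's rows.
        $cols = array_keys($row);
        $sql  = sprintf(
            'REPLACE INTO %s (%s) VALUES (%s)',
            $table,
            implode(', ', $cols),
            implode(', ', array_fill(0, count($cols), '?'))
        );
        $central->prepare($sql)->execute(array_values($row));
    }
}
```

The pull direction is the mirror image; recording the last successful sync time per location is what keeps the transferred set small.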
I am building a web application and have a couple of quick questions. From what I've learned, one should not worry about scalability when initially building the app and should only start worrying when the traffic increases. However, this being my first web application, I am not quite sure if I should take an approach where I design things in an ad-hoc manner and later "fix" them. I have been reading stories about how people start off with an app that gets millions of users in a week or two. Not that I will face the same situation, but I can't help but wonder: how do these people do it?
Currently, I have a shared hosting account on Lunarpages, and that got me started in building and testing the application. However, I am interested in learning how to build the same application in a scalable manner using the cloud, for instance Amazon's EC2. From my understanding, I can see a couple of components:
There is a load balancer that first receives requests and then decides where to route each request
This request is then handled by a server replica that then processes the request and updates (if required) the database and sends back the response to the client
If a similar request comes in, then a caching mechanism like memcached kicks in and returns objects from the cache
A black box that handles database replication
Specifically, I am trying to do the following:
Setting up a load balancer (my homework revealed that HAProxy is one such load balancer)
Setting up replication so that databases can be synchronized
Using memcached (see the sketch after this list)
Configuring Apache to work with multiple web servers
Partitioning the application to use Amazon EC2 and Amazon S3 (my application will need a great deal of storage)
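The memcached piece in particular is small on the PHP side; a sketch of read-through caching with the php-memcached extension (the hostname, key scheme, and table are invented for the example):

```php
<?php
// Sketch: read-through caching with the php-memcached extension.

$cache = new Memcached();
// Every web server points at the same memcached node(s), so a value
// cached by one server is a hit for all the others.
$cache->addServer('10.0.0.5', 11211);

function getProduct(Memcached $cache, PDO $db, int $id)
{
    $key = "product:{$id}";

    $product = $cache->get($key);
    if ($product !== false) {
        return $product; // cache hit: no database round-trip
    }

    // Cache miss: load from the database and keep it for five minutes.
    $stmt = $db->prepare('SELECT * FROM products WHERE id = ?');
    $stmt->execute([$id]);
    $product = $stmt->fetch(PDO::FETCH_ASSOC);

    $cache->set($key, $product, 300);
    return $product;
}
```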
Finally, how can I avoid burning myself when using Amazon's services? Because this is just a learning phase, I can probably make do with 2-3 servers, a simple load balancer, and replication, but I want to avoid accidentally paying loads of money.
I am able to find resources on individual topics but am unable to find something that starts off from the big picture. Can someone please help me get started?
Personally, I think you should consider how your app will scale from the start - otherwise you'll run into problems down the line.
I'm not saying you need to build it initially as a multi-server system, but if you think you'll need to do it later, be mindful of the concerns now.
In my experience, this includes things like:
Sessions. Unless you use 'sticky' load balancing, you will have to have some way of sharing session state between servers. This probably means storing session data either on shared storage or in a DB.
File uploads and replication. If you allow users to upload files, or you have a CMS that allows you to upload images/documents, those files will also need to find their way onto the other nodes in your cluster. However, if you've gone down the shared-storage route mentioned above, this should cover it.
DB scalability. If you're using traditional DB servers, you might want to think about how you'll implement scalability at that level. This may mean coding your app so that you use one connection string for reads and another for writes (see the sketch after this list). You are then free to implement replication, with one master node handling the inserts/updates and cascading the changes to read-only nodes that handle the bulk of the work.
Middleware. You might even want to go down the route of implementing some kind of message-oriented middleware solution to completely hand off business-logic functions - this will give you a great deal of flexibility in how you scale this business-logic layer in the future. Initially, though, this is a lot of complication and work for not a great deal of payoff.
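The read/write split mentioned under "DB scalability" can be as small as two PDO handles; a sketch with placeholder hostnames and credentials:

```php
<?php
// Sketch: route writes to the master and reads to a replica.
// Hostnames and credentials are placeholders.

class Database
{
    private PDO $writer;
    private PDO $reader;

    public function __construct()
    {
        // INSERT/UPDATE/DELETE statements all go to the master...
        $this->writer = new PDO('mysql:host=db-master;dbname=app', 'user', 'secret');
        // ...while SELECTs are served by a read-only replica.
        $this->reader = new PDO('mysql:host=db-replica;dbname=app', 'user', 'secret');
    }

    public function write(string $sql, array $params = []): void
    {
        $this->writer->prepare($sql)->execute($params);
    }

    public function read(string $sql, array $params = []): array
    {
        $stmt = $this->reader->prepare($sql);
        $stmt->execute($params);
        return $stmt->fetchAll(PDO::FETCH_ASSOC);
    }
}
```

One caveat worth a sentence: replication lag means a read issued immediately after a write may not see it, so "read your own writes" paths sometimes need to go to the master anyway.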
Have you considered playing around with VMs first? You can run 2-3 VMs on your local machine and set them up like you would actual servers, they just won't be able to handle real traffic levels. If all you're looking for is the learning experience, it might be an ideal way to go about it.
I have a dedicated server and need to build a new version of my personal PHP5 CMS for my customers. Setting aside the question of whether I should consider using open source, I need your opinions regarding the CMS architecture.
The first approach (since the server is completely under my control) is to build a centralized system that can support multiple sites from a single administration panel. The basic idea is that I can log in as a super user, create a new site (technically this creates a new web root and a new database, and maybe some other things), and assign modules and plug-ins to a specific customer, or develop new ones if needed. If a customer logs in at this panel, they see and can manage only their own site's content.
I have seen such a system (it was custom-built), and it's very nice: bug fixes and new features affect all customers instantly, without the need to patch every CMS installation, which might also live on another hosting server...
The negative aspect I can see is scalability: what if I need to add a second server? How do I merge the two to maintain a single core?
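For a sense of scale, the "create new site" step in the centralized approach could be little more than this (the paths, naming scheme, and credentials are all invented):

```php
<?php
// Sketch of what the super-user "create site" action might do.

function createSite(PDO $admin, string $slug): void
{
    // $slug must be strictly validated first; database identifiers
    // cannot be bound as parameters.
    if (!preg_match('/^[a-z0-9]+$/', $slug)) {
        throw new InvalidArgumentException('bad site name');
    }

    // 1) A dedicated database per customer site.
    $admin->exec("CREATE DATABASE site_{$slug}");

    // 2) A web root that the shared CMS core will serve.
    mkdir("/var/www/sites/{$slug}", 0755, true);

    // 3) A config file pointing the shared core at this site's database.
    file_put_contents(
        "/var/www/sites/{$slug}/config.php",
        "<?php return ['db' => 'site_{$slug}'];\n"
    );
}
```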
The second approach is the classical one: a stand-alone CMS for every customer.
Which way would you go, and why?
Thank you for your time.
If you were to have one central system for all clients, scalability could become easier. You can have one big database server and several identical web servers (probably behind a load balancer), and that way you don't have to worry about dividing the clients up into different servers. Your resources are pooled so if one client has a day with heavy traffic it can be taken up by several servers, rather than bringing one server (and all other clients' sites on it) to its knees.
You can get PHP sessions to work across multiple servers either by using 'sticky sessions' on your load-balancing configuration, or by getting PHP to store the data somewhere accessible to all servers (e.g. a database).
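The database option is directly supported by PHP's SessionHandlerInterface; a compact sketch, assuming PHP 8 and an existing sessions table (schema shown in the comment):

```php
<?php
// Sketch: share PHP sessions between web servers via a common MySQL
// table (assumes PHP 8 and an existing table such as:
//   CREATE TABLE sessions (id VARCHAR(128) PRIMARY KEY, data TEXT,
//                          updated_at DATETIME)).

class DbSessionHandler implements SessionHandlerInterface
{
    public function __construct(private PDO $db) {}

    public function open(string $path, string $name): bool { return true; }
    public function close(): bool { return true; }

    public function read(string $id): string|false
    {
        $stmt = $this->db->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute([$id]);
        return (string) $stmt->fetchColumn();
    }

    public function write(string $id, string $data): bool
    {
        return $this->db
            ->prepare('REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, NOW())')
            ->execute([$id, $data]);
    }

    public function destroy(string $id): bool
    {
        return $this->db->prepare('DELETE FROM sessions WHERE id = ?')->execute([$id]);
    }

    public function gc(int $max_lifetime): int|false
    {
        $stmt = $this->db->prepare('DELETE FROM sessions WHERE updated_at < NOW() - INTERVAL ? SECOND');
        $stmt->execute([$max_lifetime]);
        return $stmt->rowCount();
    }
}

// Register before session_start(); every server then reads the same table.
session_set_save_handler(new DbSessionHandler(new PDO('mysql:host=db;dbname=app', 'u', 'p')), true);
session_start();
```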
Keeping the web application files synchronised to one code base shouldn't be too difficult; there are tools like rsync that you could use to help you.
It really depends on the types of sites. That said, I would suggest that you consider using version control software to manage multiple installations. In practice, this can give you the same benefits as a centralised approach, but it gives you the freedom to postpone updating a single site (or a number of sites).