I am in the final stages of completing my project (Vizulium, an open-source photography CMS). I have one remaining stumbling block: updating the software.
The idea I want to implement is this:
1. Check the newest version on the Vizulium website (the page just displays the current stable version).
2. If a newer version exists and the user requests it:
a. Zip the updated files on Vizulium server
b. Download the files to the user's server
c. Unzip contents
I already have a tracking system in place that keeps track of the updates (datetime) that I push. I have not begun step 2. Everything is in PHP and MySQL.
Is this a typical implementation of the problem? Do I need to clarify anything?
I am not using FTP since it is a self-install and I assume the user is programming-illiterate.
Your solution is valid, but needs a few extra considerations.
You should connect to your server via HTTPS and with certificate verification to query and fetch any available updates.
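For example, the version check might look something like this (the endpoint URL, the plain-text response format, and the local version variable are placeholders, not part of Vizulium):

```php
<?php
// Hypothetical endpoint that returns the latest stable version as plain text,
// e.g. "1.4.2"; adjust to whatever your version page actually outputs.
function fetchLatestVersion($url)
{
    $ch = curl_init($url);
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_SSL_VERIFYPEER => true, // verify the certificate chain
        CURLOPT_SSL_VERIFYHOST => 2,    // verify the hostname matches the cert
        CURLOPT_TIMEOUT        => 10,
    ));
    $body = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return ($body !== false && $code === 200) ? trim($body) : null;
}

$currentVersion = '1.0'; // read from your local tracking table in practice
$latest = fetchLatestVersion('https://www.vizulium.example/latest-version.txt');
if ($latest !== null && version_compare($latest, $currentVersion, '>')) {
    // offer the update to the user
}
```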
You should sign your updates with a private key and have the client verify the updates as authentic before applying them.
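A rough sketch of the verification side, assuming you ship an RSA public key with the CMS and publish a detached SHA-256 signature next to each zip (file names are placeholders):

```php
<?php
// Verify a downloaded update archive against a detached signature created with
// the matching private key, e.g.:
//   openssl dgst -sha256 -sign private.pem -out update.zip.sig update.zip
function verifyUpdate($zipPath, $sigPath, $publicKeyPath)
{
    $publicKey = openssl_pkey_get_public(file_get_contents($publicKeyPath));
    if ($publicKey === false) {
        return false;
    }
    $result = openssl_verify(
        file_get_contents($zipPath),
        file_get_contents($sigPath),
        $publicKey,
        OPENSSL_ALGO_SHA256
    );
    return $result === 1; // 1 = valid, 0 = invalid, -1 = error
}

if (!verifyUpdate('update.zip', 'update.zip.sig', dirname(__FILE__) . '/vizulium-public.pem')) {
    die('Update signature verification failed; aborting.');
}
```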
If you need to remove an obsolete file from an install, unzipping will not do this; consider shipping an "upgrade.php" script with each upgrade that is executed to perform any extra necessary steps.
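For example, a hypothetical upgrade.php bundled in the archive might do no more than this (the file names are placeholders):

```php
<?php
// upgrade.php - shipped inside each update archive and executed after unzipping.
// Removes files that no longer exist in this release; paths are examples only.
$obsoleteFiles = array(
    'lib/old-image-resizer.php',
    'admin/legacy-settings.php',
);

foreach ($obsoleteFiles as $relativePath) {
    $path = dirname(__FILE__) . '/' . $relativePath;
    if (is_file($path) && !unlink($path)) {
        error_log("Upgrade: could not remove obsolete file $path");
    }
}

// ...plus any other one-off steps for this release (schema changes, cache clears, etc.)
```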
Your upgrade script should back up the web directory and database before performing the upgrade, and retain the backup until the user requests to remove it.
Make your upgrades incremental, so that to upgrade from 1 -> 3 you need to upgrade to version 2 first. This would of course be transparent to the user, but it ensures that the upgrades between versions are complete and that all database updates/modifications are applied in the correct order.
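A minimal sketch of the incremental walk on the client side (the hard-coded version list would really come from your server, and the download/verify/unzip step is elided):

```php
<?php
// All released versions in order; in a real install this list would come from
// the update server rather than being hard-coded here.
$allVersions = array('1.0', '1.1', '1.2', '1.3');
$installed   = '1.1'; // read from the local install's tracking table in practice

foreach ($allVersions as $version) {
    if (version_compare($version, $installed, '<=')) {
        continue; // already at or past this version
    }
    // Download + verify + unzip + run upgrade.php for this single step,
    // stopping immediately if it fails so no version is ever skipped.
    echo "Applying upgrade to $version\n";
    $installed = $version; // persist the new version after each successful step
}
```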
We're currently developing a 'sort of' e-commerce platform for our customers who are using our POS system.
This mainly consists of:
An Angular client-side
A PHP API as back-end
A MySQL database
Before I distribute the application to clients, I want to have a 'manageable' system for deploying and updating their platforms in case of code changes etc.
The initial setup would be:
Create database
Copy PHP files
Run composer
Run migrations
Modify configuration file for database credentials, salts, domain,..
Copy client side files
I was looking at Deployer for PHP, but I'm not sure how the whole database creation and config file modification would work. I originally had the database creation in one of my migrations, but this would require a root DB user (or one with CREATE permissions), and that user would need to be created as well.
The initial setup part could be done manually (it won't be more than about 5 installations per week, but I would like to make it as simple as possible so that our support staff can do this instead of me every time).
The next part would be Updates.
I don't want to FTP to every server and apply changes. Updates can be both server side and client side. What would be the best way to do this:
Have a central system with all versions and registered websites at our end, and let the client server check daily for a new version. If there is a new version, download all files from our server and run the migrations.
Push the new version to all clients via Deployer. But wouldn't this overwrite or move the original config file with the DB credentials and such?
What if I need to add a new config setting? (Application settings are stored in the database, but things like the 'API' settings live in a config file.)
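If I end up using Deployer, its shared files/dirs mechanism looks like it could cover the config-overwrite concern; a rough sketch of what I have in mind (the repository URL and paths are just examples):

```php
<?php
// deploy.php - rough sketch of the parts relevant to the config question;
// the repository URL and paths are examples only.
namespace Deployer;

require 'recipe/common.php';

set('repository', 'git@example.com:our-platform.git');

// Shared files/dirs are kept outside the release directories and symlinked into
// every new release, so a deploy never overwrites the per-client config or uploads.
set('shared_files', array('config/config.php'));
set('shared_dirs', array('storage/uploads'));

// ...host definitions and the deploy task as in the standard recipes.
```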
There is a chance that all these client servers will be hosted via our hosting provider, so we'll have access to all of them and they'll all be set up the same (configuration and such).
I've only written web applications that run in one (server) location, so updating those was easy, for example via Deploybot and such, and the database setup was done manually. But now I'm stepping up my game and I want to make sure that I don't give myself more work than necessary.
Here's our case on developing an e-commerce platform - maybe you'll find answers to your questions there.
Codenetix specializes in custom development, mostly web apps, so if you need help - let us know.
Good luck with your project!
We have Magento EE 1.14.0.1. Recently we moved to a new AWS EC2 server and an ElastiCache Redis server, and then some random products started disappearing in the frontend. They exist in the backend and are configured correctly (visible, enabled, in stock, etc.). Only after you save the product in the backend does it show up again in the frontend, even without flushing any cache.
Is this issue related to the Redis cache?
And if it is, how do I fix it?
Any input would be appreciated to direct me to a solution.
Thanks
Update: I had marked everything under Index Management as Update on Save, so I reverted that back to Update on Schedule, and I think that fixed the issue. But I still want to keep my store inventory up to date.
"It's an index issue, every time you update data (product, stock) from database, you have to manually re-index Magento."
That is true for Community Edition, not Enterprise Edition. In addition, there can be some extra issues when migrating to AWS. After 4 months of troubleshooting on an inherited server migrated into AWS, I found a number of issues/solutions.
EE issues
Enterprise Edition indexing is asynchronous for many of the indexes. In addition, not all EE indexes are configured in the typical place.
On the Admin menu, select System > Configuration. In the panel on the left, under Advanced, select Index Management.
http://docs.magento.com/m1/ee/user_guide/system-operations/index-configuration.html
Even when set to "update on save", in my experience it frequently does not update on save.
AsyncIndexing was unstable in versions prior to 1.14.3.x.
Upgrade! It was possible for the partial indexing process to break in such a way as to make it impossible for indexing to proceed. One way this will occur is if you are running PHP for the website (typically via PHP-FPM) with a different user ID and group than you use to run the cron jobs (shell access). Indexing depends on the creation of a file to 'lock' the process, and the file may only be written/deleted by the user which created it.
I have found that, for performance reasons, it is best to set ALL indexes to "update manually". Do not schedule a periodic reindex-all process; it is useless due to async indexing. Just make sure your cron is running and everything should be fine.
The AsyncIndex process uses MySQL triggers... which have an issue when trying to migrate a Magento database from one server to another. The way they are created initially, they can ONLY be used by the database user that existed when the triggers were first created. If you change the database user for the new server, the triggers will not migrate. Even worse, there is almost no indication that this has occurred; everything except indexing runs perfectly, so how can you tell?
Lastly, "reindex all" does not always reindex all. Thanks to various posts on the internet, I created a shell script to make Magento think all the products were updated and the index needs to be rebuilt:
https://gist.github.com/gamort/5dc5e16bdec00a8bb3b922fc463af17c
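I won't reproduce the gist here, but the general idea is simply to touch every product so the change tracking sees them all as updated. A heavily hedged sketch of that idea (not necessarily what the linked gist does; it assumes a stock Magento 1 schema, and you should back up before running anything like it):

```php
<?php
// Rough sketch only. Bumping updated_at on every product makes the trigger-based
// changelogs treat them all as changed, so the next index cron run rebuilds them.
// Credentials are placeholders; back up first.
$pdo = new PDO('mysql:host=localhost;dbname=magento', 'magento_user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$affected = $pdo->exec("UPDATE catalog_product_entity SET updated_at = NOW()");
echo "Marked $affected products as updated; the index cron will do the rest.\n";
```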
AWS issues
Using AWS ElastiCache Redis has a hidden gotcha - the default availability zone it is launched in may be different from your server's zone. In my case, the server was in us-east-1a while Redis defaulted to us-east-1b. This resulted in occasional timeouts when looking up data from the cache. While the website code can usually recover, the indexing code does not, which leaves the index cron process in a broken state.
Almost as importantly, you will pay a trivial amount per GB for data transfer from zone 1a to 1b. But when your cache is working, this "trivial" amount can add up to a lot! We had a recurring $10+/day [$500-$600 a month] cross-zone data transfer fee! Launch a new Redis server in your actual zone, use the redis-cli on your web server to make sure you can connect (we had firewall configuration issues), and only THEN update your configuration.
AWS RDS servers also have a hidden gotcha (hope you're not too overwhelmed yet). Migrating the database from another server to Amazon RDS ran into an extremely slight change in what MySQL considers valid SQL for a specific function... which Magento EE just happens to use. :-) I ended up installing a new copy of Magento EE and using Navicat to sync the database structures.
Solr issues
Suffice it to say, there are Solr issues as well. Mostly due to the schema, but I also found that wiping the Solr index and letting it rebuild helped.
Magento Rewrite/Form issues
This issue occurs when you upgrade to 1.14.3 - which of course you should do, since it fixes so many indexing issues. Version 1.14.3.x added form keys to a number of forms, including the customer sign-up form. So if you created your own custom phtml templates for the login, they will not work! You need to add that form key field to your customization. Not a big deal though, since you documented which template file you originally copied it from, right?
All in all, I would estimate going through the checklist for migration to be a good 20 hours, and possibly up to 80 depending on what issues you run into. And at the end of the day, since the fixes are mainly in cron jobs which are not easily visible, the website owner will be hard pressed to tell how they benefited from all that work. In my case, disappearing products had already been an issue for over a year before we inherited the site, so the client was understanding about the difficulties.
It's an index issue, every time you update data (product, stock) from database, you have to manually re-index Magento. If you don't do that, you'll have corrupted data in the index and you'll lose the SQL join on the product list request.
I am working on a mobile application that communicates with an IIS server to synchronize data among application users.
The server is implemented in PHP and MySQL. The final product will consist of the server and the application. In other words, every client (company) is going to use a different server, and the employees of each company will be the users of the mobile application. As soon as the application is released, bugs are expected to come up. Therefore, each synchronization server will require updates. The DB schema and the PHP code will probably need to be altered. Using git to have clients fetch the most recent version of the server is not an option, since the clients are not able to handle issues such as merge conflicts.
I need to automate the update process as much as possible. Is there any tool or piece of advice that would help me do so?
Thank you in advance for your assistance.
For the MySQL part, I would suggest writing your own migrations (PHP scripts), which, if carefully tested, should apply the DB changes correctly. The customers MUST be forbidden from modifying the database, or you'll never be able to handle migrations correctly.
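A minimal sketch of such a hand-rolled runner (the table name, directory layout and credentials are just examples, and it assumes one statement per .sql file):

```php
<?php
// migrate.php - apply every .sql file in migrations/ that hasn't run yet.
// Files are named so alphabetical order matches the intended order.
// Assumes one statement per file; split on ';' if you need multi-statement files.
$pdo = new PDO('mysql:host=localhost;dbname=sync', 'sync_user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec("CREATE TABLE IF NOT EXISTS schema_migrations (
    filename VARCHAR(255) PRIMARY KEY,
    applied_at DATETIME NOT NULL
)");

$applied = $pdo->query("SELECT filename FROM schema_migrations")->fetchAll(PDO::FETCH_COLUMN);

foreach (glob(dirname(__FILE__) . '/migrations/*.sql') as $file) {
    $name = basename($file);
    if (in_array($name, $applied, true)) {
        continue; // already applied on this install
    }
    $pdo->exec(file_get_contents($file));
    $pdo->prepare("INSERT INTO schema_migrations (filename, applied_at) VALUES (?, NOW())")
        ->execute(array($name));
    echo "Applied $name\n";
}
```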
As for the second part, the PHP sync: I really don't understand what the problem with using git is - I think that's the right way to go. I don't understand your concerns about conflicts, because the customers won't have to deal with them. When you merge the branches you will have to resolve the conflicts yourself, and after you push to the git server the clients will only have to "pull" the new version.
So, to finish up: you should create a script that, when a new version is available, does a git pull and afterwards executes the DB migration scripts (if any).
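The client-side updater could be as small as this cron-driven script (the paths and the migrate.php name follow the sketch above and are assumptions, not a fixed convention):

```php
<?php
// update.php - run from cron on each client server; paths are examples.
$appDir = '/var/www/sync-server';

// Pull the latest code; --ff-only keeps the deployment history linear.
exec('cd ' . escapeshellarg($appDir) . ' && git pull --ff-only 2>&1', $output, $exitCode);
if ($exitCode !== 0) {
    error_log("Update failed during git pull:\n" . implode("\n", $output));
    exit(1);
}

// Then apply any pending database migrations (see the runner sketch above).
passthru('php ' . escapeshellarg($appDir . '/migrate.php'), $exitCode);
exit($exitCode);
```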
Anyone who works with cached client systems knows that sometimes you have to update both server and client files. So far I've managed to partially solve the problem by making one call every time the software is opened to ask PHP which version the server is on. I compare the result with the version Flex is on and voilà. The problem is, whenever I need to make an emergency update during business hours, it's impossible to know how many clients already have the Flex app open.
So to sum up: I solved the cache problem by checking the version at start-up time; if your browser cached an old build, its version won't match the server's.
The only solution I can think of for the 'already opened app' problem is to put a gateway between the PHP services and the Flex calls, where I would pass the Flex version and compare it inside the gateway before the service is actually called, although I don't like this solution.
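Roughly, the gateway I'm imagining would be something like this (the header name, status code and JSON shape are just what I had in mind, not an existing convention):

```php
<?php
// gateway.php - hypothetical front controller that every Flex call passes through.
// The client sends its build number in a custom header; if it is older than the
// server's version, the call is refused so the client knows it must reload.
define('SERVER_APP_VERSION', '2.3.0'); // example value

$clientVersion = isset($_SERVER['HTTP_X_APP_VERSION']) ? $_SERVER['HTTP_X_APP_VERSION'] : null;

if ($clientVersion === null || version_compare($clientVersion, SERVER_APP_VERSION, '<')) {
    header('HTTP/1.1 409 Conflict'); // tell the Flex app it is outdated
    header('Content-Type: application/json');
    echo json_encode(array('error' => 'outdated_client', 'required' => SERVER_APP_VERSION));
    exit;
}

// ...otherwise dispatch to the real PHP service as before.
```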
Any ideas?
Thanks.
You can download this application from the Adobe website: http://labs.adobe.com/technologies/airlaunchpad/ It will allow you to build a new test app; you need to select the "auto update" property in the menu. That will generate all the necessary files for you, both for server and client.
The end result will have a server-based XML file, and each client app will be set up to check on a recurring basis whether the XML file offers a newer version of the application and, if so, to automatically download and install it. You can change the "check for update" frequency to your liking in the source code; by default it is tied to the application-open event.
This check will also look for updates while the app is open, so it should solve your problem.
We're currently designing a rewrite of our PHP website. The new version will be under SVN version control and have a separate database for development and live sites.
Currently we have about 200,000 images on the site and we add around 5-10 a month. We'd like to have these images under SVN as well.
The current plan is to store and serve the images from the file system while serving their meta data from the database. Images will be served through a PHP imaging system with Apache rewrite rules so that http://host/image/ImageID will access a PHP script that queries the database for an image with the specified ID and (based on a path column in the table) returns the appropriate image.
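Roughly, the serving script would be along these lines (the rewrite rule, table and column names are simplified placeholders for our setup):

```php
<?php
// image.php - reached via a rewrite rule such as:
//   RewriteRule ^image/(\d+)$ image.php?id=$1 [L]
// Table and column names below are simplified placeholders.
$id = isset($_GET['id']) ? (int) $_GET['id'] : 0;

$pdo  = new PDO('mysql:host=localhost;dbname=site', 'site_user', 'secret');
$stmt = $pdo->prepare('SELECT path, mime_type FROM images WHERE id = ?');
$stmt->execute(array($id));
$image = $stmt->fetch(PDO::FETCH_ASSOC);

if (!$image || !is_file($image['path'])) {
    header('HTTP/1.1 404 Not Found');
    exit;
}

header('Content-Type: ' . $image['mime_type']);
header('Content-Length: ' . filesize($image['path']));
readfile($image['path']);
```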
The issue I'm having is keeping the image files and their meta data in sync between live and development sites.
Adding new images is (awkward, but) easy for the development team: we can add the image to our SVN repository in the same manner we do all files and manually create the meta data in both the live and test databases.
The problem arises when our employees need to upload new images through the website itself.
One viable solution I've been able to come up with is having our PHP upload script commit the new images to SVN and send INSERT queries to both live and development databases. But to me this seems inefficient. Plus SVN support in PHP is still experimental and I dislike having to rely on exec() calls.
I've also considered a third, separate database just for image metadata, as well as not storing the images in SVN at all (but they are part of the application, not just 'content' images that would be better off simply being backed up).
I'd really like to keep images in SVN and if I do I need them to stay consistent with their meta data between the live and development site. I also have to provide a mechanism for user uploaded images.
What is the best way of handling this type of scenario?
The best way to handle this would be to use a separate process to keep your images and metadata in sync between live and dev. For the image files, you can use a bash script run from cron to do an "svn add" and "svn commit" for any images uploaded to your live environment. Then you can run a periodic "svn up" in your dev environment to ensure that dev has the latest set. MySQL replication would be the best way to keep the live and dev databases in sync given your data set.
This solution assumes two things:
1) Data flows in one direction, from prod to dev and not the other way around.
2) Your users can tolerate a small degree of latency (the amount of time for which live and dev will be out of sync).
The amount of latency will be directly proportional to the amount of data uploaded to prod. Given the 5-10 images added per month, latency should be infinitesimal.
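Since the rest of the stack is PHP, the cron side could even be a small PHP CLI script rather than bash; a rough sketch (the paths and commit message are placeholders):

```php
<?php
// sync-images.php - run from cron on the live server; commits any newly uploaded
// images to SVN so dev can pick them up with a periodic `svn up`. Paths are examples.
chdir('/var/www/live/images');

// Schedule any unversioned files for addition; --force skips files already under
// version control instead of erroring out.
exec('svn add --force . 2>&1');

// Commit whatever was added; this is a no-op if nothing changed.
exec('svn commit -m "Auto-commit uploaded images" --non-interactive 2>&1', $output, $code);
if ($code !== 0) {
    error_log("Image auto-commit failed:\n" . implode("\n", $output));
}
```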
I've had to solve this sort of problem for a number of different environments. Here are some of the techniques that I've used; some combination may solve your problem, or at least give you the right insight to solve it.
Version controlling application data during development
I worked on a database application that needed to be able to deliver certain data as part of the application. When we delivered a new version of the application, the database schema was likely to evolve, so we needed SQL scripts that would either (1) create all of the application tables from scratch, or (2) update all of the existing tables to match the new schema, add new tables, and drop unneeded tables. In addition, we needed to be able to prove that the upgrade scripts would work no matter which version of the application was being upgraded (we had no control of the deployment environment or upgrade schedules, so it was possible that a given site might need to upgrade from 1.1 to 1.3, skipping 1.2).
In this instance, what I did was take a tool that would dump the database as one large SQL script containing all of the table definitions and data. I then wrote a tool that split apart this huge script into separate files (fragments) for each table, stored procedure, function, etc. I wrote another tool that would take all of the fragments and produce a single SQL script. Finally, I wrote a third tool that was used during installation that would determine which scripts to run during installation based upon the state of the database and installed application. Once I was happy with the tools, I ran them against the current database, and then edited the fragments to eliminate extraneous data to leave only the parts that we wanted to ship. I then version-controlled the fragments along with a set of database dumps representing databases from the field.
My regression test for the database would involve restoring a database dump, running the installer to upgrade the database, then dumping the result, splitting the dump into fragments, and comparing the fragments against the committed versions. If there were any differences, that pointed to problems in the upgrade or installation fragments.
During development, the developers would run the installation tool to initialize (really upgrade) their development databases, then make their changes. They'd run the dump/split tool, and commit the changed fragments, along with an upgrade script that would upgrade any existing tables to match the new schema. A continuous integration server would check out the changes, build everything, and run all of the unit tests (including my database regression tests), then point the finger at any developer that forgot to commit all of their database changes (or the appropriate upgrade script).
Migrating Live data to a Test site
I build websites using WordPress (on PHP and MySQL) and I need to keep 'live' and 'test' versions of each site. In particular, I frequently need to pull all of the data from 'live' to 'test' so that I can see how certain changes will look with live data. The data in this case is web pages, uploaded images, and image metadata, with the image metadata stored in MySQL. Each site has completely independent files and databases.
The approach that I worked out is a set of scripts that do the following:
Pull two sets (source and target) of database credentials and file locations from the configuration data.
Tar up the files in question for the source website.
Wipe out the file area for the target website.
Untar the files into the target file area.
Dump the tables in question for the source database to a file.
Delete all the data from the matching tables in the target database.
Load the table data from the dump file.
Run SQL queries to fix any source pathnames to match the target file area.
The same scripts could be used bidirectionally, so that they could be used to pull data to test from live or push site changes from test to live.
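Condensed into one PHP CLI sketch of the live-to-test direction (the credentials, paths and table list are placeholders; the real scripts are split up and have proper error handling):

```php
<?php
// pull-live-to-test.php - condensed sketch of the steps above; all credentials,
// paths and table names are illustrative only.
$src = array('files' => '/var/www/live/wp-content/uploads', 'db' => 'live_db', 'user' => 'live', 'pass' => 'secret');
$dst = array('files' => '/var/www/test/wp-content/uploads', 'db' => 'test_db', 'user' => 'test', 'pass' => 'secret');
$tables = 'wp_posts wp_postmeta';

// Tar up the source files, wipe the target file area, untar into it.
shell_exec("tar -czf /tmp/site-files.tar.gz -C {$src['files']} .");
shell_exec("rm -rf {$dst['files']} && mkdir -p {$dst['files']}");
shell_exec("tar -xzf /tmp/site-files.tar.gz -C {$dst['files']}");

// Dump the source tables and load them into the target database;
// --add-drop-table clears the matching tables before reloading them.
shell_exec("mysqldump --add-drop-table -u{$src['user']} -p{$src['pass']} {$src['db']} {$tables} > /tmp/site-tables.sql");
shell_exec("mysql -u{$dst['user']} -p{$dst['pass']} {$dst['db']} < /tmp/site-tables.sql");

// Fix any source pathnames so they point at the target file area.
$pdo = new PDO("mysql:host=localhost;dbname={$dst['db']}", $dst['user'], $dst['pass']);
$pdo->exec("UPDATE wp_posts SET guid = REPLACE(guid, '/var/www/live/', '/var/www/test/')");
```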
If you already have a solution to deal with data migration from dev to prod for your databases, why not store the actual images as BLOBs in the DB, along with the metadata?
As the images are requested, you can have a script write them to flat files on the server (or use something like memcached to help serve up common images) the first time, and then treat them as files afterwards (doing a file_exists() check or similar). Have your mod_rewrite script handle the DB lookup. This way, you get the benefit of having the majority of your users access 'flat' image files handled by your mod_rewrite script, while everything stays nicely in sync with the various DBs. The downside is that your DBs get big, of course.
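A sketch of that request path (the table, column names and cache directory are assumptions):

```php
<?php
// image.php - serve an image by ID, lazily writing it from the DB BLOB to a
// flat file on first request. Table/column names and paths are assumptions.
$id       = isset($_GET['id']) ? (int) $_GET['id'] : 0;
$cacheDir = dirname(__FILE__) . '/image-cache';
$cached   = "$cacheDir/$id";

if (!is_file($cached)) {
    $pdo  = new PDO('mysql:host=localhost;dbname=site', 'site_user', 'secret');
    $stmt = $pdo->prepare('SELECT data, mime_type FROM images WHERE id = ?');
    $stmt->execute(array($id));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    if (!$row) {
        header('HTTP/1.1 404 Not Found');
        exit;
    }
    if (!is_dir($cacheDir)) {
        mkdir($cacheDir, 0755, true);
    }
    file_put_contents($cached, $row['data']);             // materialize the flat file once
    file_put_contents("$cached.mime", $row['mime_type']); // remember its content type
}

header('Content-Type: ' . file_get_contents("$cached.mime"));
readfile($cached);
```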