I am working on modifying a Drupal 7 site and have run into strange behavior where an old version of a file I've changed keeps reappearing. I've flushed caches via the admin interface and also truncated the cache_ tables directly.
On my staging server (which I have access to), things work fine. On our production server (which I do not have SSH access to and cannot easily get access to), they do not, and I have limited ability to debug, so I have to guess. Since the filesystems have identical contents, I suspect some Drupal or Apache setting is causing the old files to be served. The behavior is almost as if Drupal looks for any file with the same name (even one in the wrong directory) and serves that.
In my case, I have all my files under /var/www/html (standard LAMP setup). At one point I ran tar cfz on the entire thing and left the result at /var/www/html/archive.tgz by mistake. So now I'm wondering whether Drupal is somehow reading the contents of that archive and using the old file. It sounds crazy, but has anyone run into something like that?
The other possibility is that my cache clearing is still incomplete in some way. Apart from truncating the cache_ tables in the database, is there any way to forcibly remove all cached entries? Any insight into this mystery would be appreciated.
Most likely, your production server runs some additional caching proxy such as Varnish. You need to clear the cache there as well.
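If you can get even temporary shell access to the production box, both layers can be flushed from the command line. A sketch, assuming drush is installed for the Drupal side and a Varnish 3+ proxy in front (adjust to whatever proxy is actually there):

```shell
# Flush every Drupal 7 cache bin (equivalent to truncating the cache_*
# tables plus rebuilding the registry):
drush cache-clear all

# Invalidate everything Varnish holds for this site (Varnish 3+ ban syntax):
varnishadm "ban req.url ~ ."
```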
Related
I have a Symfony2 website that I'm testing in production. I cleared its cache because I've made modifications and will probably make more; however, there is a small problem:
While the cache is being cleared (and afterwards, say, while I want to warm it up), anyone who accesses the website triggers a cache rebuild. That creates a problem: the cache starts being rebuilt before the deletion has finished, so half of it gets deleted again while it is being built.
What happens afterwards is that only part of the cache gets built. Symfony thinks the cache is built entirely and stops trying to build it, but it is running on a half-built cache. The deletion takes a while (~15 s), and during that window nobody must trigger a cache rebuild by accessing the website.
Either that, or the cache is built completely and overwrites the old cache, but the system treats the new files as old ones, deletes part of them, and some others remain. I'm not entirely sure, and I don't know how to check which of these is happening.
For instance, one of the errors that I'd get is
The directory "D:\xampp\htdocs\med-app\app\app\cache\dev/jms_diextra/metadata" does not exist.
If I weren't using that bundle, I'd get a similar cache error from Doctrine instead. This appears on every access until I delete the cache again WITHOUT anyone accessing the website in the meantime. It completely blocks access to the website and makes it non-functional.
Also, what about the warmup? That takes a while, too. What if someone accesses the website while the cache is being warmed up? Doesn't that create a conflict, too?
How should I handle this problem? Do I need to stop Apache, clear and warm the cache, and then restart Apache? How is this handled on a website in production?
EDIT
Something interesting I have discovered: the bug occurs when I delete the cache/prod folder itself. If I delete only the contents of the folder, without deleting the folder, the bug does not seem to occur. I wonder why.
It is usually good practice to lock the website into maintenance mode if you're performing updates, or clearing the cache for any other reason, in production. Sometimes web hosting services have an option to handle this for you, and there is a nice bundle for handling maintenance easily from the command line.
This way you can safely delete the cache and be sure no-one visits the page and rebuilds the cache incorrectly.
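As a concrete sketch: with LexikMaintenanceBundle (one such bundle; the lexik:* command names below come from it, not from Symfony core), the whole sequence looks roughly like this on a Symfony2 app:

```shell
# Close the site so visitors can't trigger a rebuild mid-clear
php app/console lexik:maintenance:lock --no-interaction

# Clear and pre-build the prod cache while nobody can hit the framework
php app/console cache:clear --env=prod --no-warmup
php app/console cache:warmup --env=prod

# Open the site back up
php app/console lexik:maintenance:unlock --no-interaction
```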
Usually, having to clear the Symfony cache means you're updating to a new version, so you're not only clearing the cache but probably also dumping assets and performing other tasks. In this case, what has worked very well for me in the past is to treat each production release as its own version in its own folder: you install the new version disconnected from the webserver, and then just point the webserver at the new version when you are done. The added benefit is that if you mess something up and have to perform a rollback, you just immediately link back to the previous version.
For example, say your Apache config always points DocumentRoot to a specific location:
DocumentRoot /var/www/mysite/web
You would make that root a symlink to your latest version:
/var/www/mysite/web -> /var/www/versions/1.0/web
Now say you have version 1.1 of your site to install. You simply install it to /var/www/versions/1.1 - put the code there, install your assets, update the cache, etc. Then simply change the symlink:
/var/www/mysite/web -> /var/www/versions/1.1/web
Now if the site crashes horribly you can simply point the symlink back. The benefit here is that there is no downtime for your site and it's easy to roll back if you made a mistake. To automate this I use a bash script that installs a new version and updates the symlink, with the commands chained via && so that if one step of the install fails, the whole install fails and you're not stuck in version limbo.
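A minimal sketch of that idea (paths are illustrative, and the commented step is where a real script would copy code, install assets, and warm the cache inside the release directory):

```shell
#!/bin/sh
# Each release lives in its own directory; a symlink selects the live one.

deploy() {
    version=$1; versions_dir=$2; docroot_link=$3
    # Prepare the release directory first; the && chain means the symlink
    # is only touched if every earlier step succeeded.
    mkdir -p "$versions_dir/$version/web" &&
    # ... copy code, install assets, warm caches into the release dir ...
    # -n replaces the symlink itself instead of writing inside its target.
    ln -sfn "$versions_dir/$version/web" "$docroot_link"
}
```

Rolling back is just calling it again with the previous version number, e.g. `deploy 1.0 /var/www/versions /var/www/mysite/web`.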
Granted, there are probably better ways to do all of the above, or ways to automate it further, but the point is that if you're changing production you'll want to perform the Symfony installation/setup without letting users interfere with it.
I am in the process of migrating an existing Drupal website from another provider to Bluehost.com -- while I think using Bluehost.com is not relevant in this context I thought I'd mention it anyway, in case there are indeed some particularities I'm not aware of.
The site is a Drupal 6 installation, and I'm told it previously worked on Bluehost too, so you'd think there wouldn't be any problems. However, having copied it over, I encounter a big problem: all responses from Drupal are sent with Content-Encoding set to application/x-gzip. As a result, all browsers present a download dialog box rather than rendering the content.
I have actually curl'd the index page and run it through gunzip, and the output is the correct HTML for the site; it just somehow ends up gzip'd in a way that mangles the content type and confuses browsers.
Talking to previous maintainers of the site, they suggested using PHP 5.4 (as I understand it, they were running it on PHP 5.5, and despite all the Drupal version recommendations it ran perfectly well, I'm told).
I am now trying to eliminate any gzip'ing that occurs, and I've narrowed it down to a few layers that could cause it, but even after addressing these it still doesn't work:
SetEnv no-gzip 1 in .htaccess
zlib.output_compression = Off in php.ini
Drupal had the Boost module installed, with some corresponding settings in .htaccess; I've removed those from the .htaccess file and deleted the boost directory from sites/all/modules
The problem still stands and my files are being sent to the browser compressed. Is there any other way to disable this?
Note that this only happens for pages served by Drupal; uploading a simple PHP page and navigating to its URL works fine, which suggests a Drupal (rather than Apache/PHP) problem.
I've noticed the mimedetect module has a definition for application/x-gzip in it, but I'm not sure how that could be related, and removing it didn't help either.
Any ideas where to look and/or what might cause it?
Happy to provide any other insights that might be useful in diagnosing this.
OK, so having actually reset the database cache, and with the settings above, this now works. I'm trying to figure out which of the above actually solved it.
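For anyone hitting the same thing later: Drupal 6 can gzip cached pages itself when page caching is enabled (the page_compression variable), so stale compressed copies can sit in the cache table. Assuming shell access and drush, a sketch of resetting both at once (not necessarily the exact fix above):

```shell
# Turn off Drupal 6's own page compression (the Performance settings page)...
drush vset page_compression 0
# ...then flush the cached, possibly gzip'd, pages.
drush cache-clear all
```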
I'm relatively new to SilverStripe CMS and I'm trying to move a site to a new host.
I followed all the instructions I could find on their official forums but to no avail.
Here is what I mean.
Here you can see what the site should look like
http://www.efekto.co.za
But this is what it looks like after I moved it (copied everything to my public_html folder on the new host, set up the DB username and password, etc.)
Please help cause I'm utterly and truly stuck at the moment.
Someone else initially moved the site, so I decided to delete everything and move it all myself; it's hard to explain otherwise what I have or haven't tried.
First I tried to install just a base SilverStripe. I got that working, but only after specifying the database host as 127.0.0.1 rather than the external IP. So I deleted the base install, copied over all the site files again, and this time modified the DB params to use 127.0.0.1. Some of the pages now actually show valid content, but everything is in this light blue kind of color (a SilverStripe default, it seems?). It is as if it's missing some kind of master page or something: I can see content, but no module thingies like menus, the blog section, etc.
I should also mention that to test this I changed my hosts file so that www.efekto.co.za resolves to the site's new IP address at our new host. So from my machine it resolves to the new IP, but from the server's perspective, when it fetches things like CSS it obviously fetches them from wherever www.efekto.co.za is currently hosted. Hope that makes sense?
So, one step in the right direction at least. We have content and no more errors. Now, what's up with this blue color scheme?
http://www.efekto.co.za seems to be fine now.
My guess: you had a silverstripe-cache folder on your old server, which you copied over. It contains (as you might suspect) cached files with absolute paths. If your path structure is not exactly the same on both servers, you will run into problems. So always remove everything from the cache folder when moving sites around.
If that isn't the problem, take a look at the Apache log file (probably /var/log/apache2/error.log, but this can vary). As it's a server error, it should tell you what the problem is...
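One way to follow that cache advice is to exclude the cache folder at packing time, so the stale absolute paths never reach the new host (the folder name silverstripe-cache is assumed; adjust if yours differs):

```shell
# pack_site: archive a site directory, leaving the cache folder (and its
# stale absolute paths) behind. SilverStripe rebuilds the cache on demand.
pack_site() {
    src=$1; archive=$2
    tar czf "$archive" --exclude='silverstripe-cache' -C "$src" .
}
# e.g. pack_site /var/www/oldsite /tmp/site.tgz
```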
Check the error log in cPanel or wherever it lives; perhaps a PHP or Apache directive is different on this server. You'll find out from the logs what the problem is.
I got this once, and the problem was that dev/build wouldn't run because the PHP memory limit configured in SilverStripe was set higher than the server allowed.
Also, go into _config and set the environment directive to 'debug'; this will make SilverStripe display as much output as it can on the page.
I've just gotten my production site up and running. I have a lot more work to do, and I'm realizing I now need a development server before pushing changes live onto the production site (with users) - obviously...
This thread (and a lot more on Stack) describe me:
Best/Better/Optimal way to setup a Staging/Development server
Anyhow... reading these threads is outright confusing at times, with all the terminology being thrown around and my limited CentOS/Apache knowledge.
My goal is to:
Make some changes to files as needed.
Test the changes on the same server, to ensure settings are identical.
If all is OK, I can now save a version of this somewhere; perhaps locally is enough for now (Bazaar seems like a possibility?)
Finally, replace all of the changed files via SSH or SFTP or something...
My worries are:
Uploading changes while users are in the system.
How to upload the files that have changed, only.
I'd love somebody to either link to a great guide for what I'm thinking about (one that leaves nothing to the imagination, I'd hope) or make some kind of suggestion. I'm running in circles trying out different VCSes and programs to manage them, etc...
I'm the only one developing, and I just want a repeatable, trustworthy solution that can work for me without making my life too miserable trying to get it set up (and keep it set up).
Thanks much.
If you have the ability to create a staging subdomain on the production server, here is how I would (and do) handle it:
Develop on your development machine, storing your code in a VCS. I use subversion, but you may find another you prefer. After making changes, you check in your code.
On your production server, create a staging subdomain in an Apache VirtualHost that is identical to, but isolated from, your production VirtualHost. Check out your code from the VCS into the staging subdomain area. After making changes, you then run an update from your VCS, which pulls down only the changed files. Staging and production can share the same data set, or you may have a separate database for each.
The reason for using a subdomain instead of just a different directory is that it enables you to use the same DocumentRoot for both staging and production. It's also easy to identify where you are if you use something like staging.example.com.
When you're certain everything works as it's supposed to you can run a VCS update on the production side to bring the code up to date.
It is important to be sure that you have instructed Apache to forbid access to the VCS metadata directories (.svn, .git, whatever).
Addendum
To forbid access to .svn directories use a rewrite rule like:
RewriteEngine on
RewriteRule .*\.svn/.* - [F]
This will return a 403 for them. You could also redirect them to the homepage to make it less obvious they're even present.
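If mod_rewrite feels heavy for this, Apache 2.4 can deny these paths directly; a sketch covering both .svn and .git:

```apache
# Deny any request path containing a /.svn or /.git directory (Apache 2.4)
<DirectoryMatch "/\.(svn|git)">
    Require all denied
</DirectoryMatch>
```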
In terms of worry #1, remember that even StackOverflow periodically goes down for maintenance when people are using it. Just provide a good mechanism for you to switch the site to maintenance mode (and back out) and you'll be fine.
Thank you everyone for the tips/hints/etc...
I've finally found the perfect solution for my needs, SpringLoops... http://www.springloops.com/v2/
It manages my SVN repository (which I use TortoiseSVN with) and will actually deploy the changes onto my two sites: staging and production.
I highly recommend it, it's working wonderfully so far.
Thanks!
You need a version control system, like Subversion, Git or Mercurial.
When you change files, you commit those changes to a repository. This keeps track of all the changes you've made and lets you undo them if it turns out you did something stupid. It also lets you restore files you accidentally delete or lose.
On top of that, it makes deploying updates as simple as typing 'git pull' or 'svn update' on the production server. Only the files that changed get transferred and put in place.
You will get this advice no matter how many times the question is re-asked. You need version control.
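That pull-based flow can be tried end to end with nothing but local directories; a sketch using git (all paths are throwaway temp directories):

```shell
# A local simulation of pull-based deployment.
tmp=$(mktemp -d)

# The shared repository; on a real setup this lives on your VCS host.
git init -q --bare "$tmp/repo.git"

# Developer working copy: commit a release and push it.
git clone -q "$tmp/repo.git" "$tmp/dev" 2>/dev/null
cd "$tmp/dev"
git config user.email dev@example.com
git config user.name dev
echo 'v1' > index.php
git add index.php
git commit -qm 'first release'
git push -q origin HEAD

# Production checkout: created once, then updated with plain pulls.
git clone -q "$tmp/repo.git" "$tmp/prod"

# A later fix: commit and push on dev, pull only the changes on prod.
cd "$tmp/dev"
echo 'v2' > index.php
git commit -qam 'fix front page'
git push -q origin HEAD
cd "$tmp/prod"
git pull -q
```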
I'm running the site at www.euroworker.no; it's a Linux server and the site has a backend editor. It's a Smarty/PHP site, and when I try to update a few of the .tpl files (two or three) they don't update. I have tried uploading through FTP and that doesn't work either.
It runs on the livecart system.
Any ideas?
Thanks!
Most likely, Smarty is fetching the template from the cache and not rebuilding it. If it's a one-time thing, just empty the cache directory or directories (templates_c). If it happens more often, you may have to adjust Smarty's caching behaviour in the configuration (among others, $smarty->caching and $smarty->cache_lifetime).
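Emptying the compile/cache directory by hand is safe, since Smarty regenerates the files on the next request. A sketch (the directory name templates_c is the default; yours may differ) that deletes only the contents, so the directory and its web-server-writable permissions survive:

```shell
# Remove everything inside Smarty's compiled-template directory, but keep
# the directory itself (it must stay writable by the web server).
clear_smarty_cache() {
    find "$1" -mindepth 1 -delete
}
# e.g. clear_smarty_cache /var/www/site/templates_c
```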
Are you saying that when you attempt to upload a new version it isn't updating the file? Or that it's updating the file, but the browser output doesn't reflect the new templates?
If it's the latter problem, delete all the files in your templates_c directory. If it's the former problem, er, you might want to check out ServerFault or SuperUser.