Migrating a Silverstripe CMS site from one server to another - php

I'm relatively new to Silverstripe CMS and I'm trying to move a site to a new host.
I followed all the instructions I could find on the official forums, but to no avail.
Here is what I mean.
Here you can see what the site should look like:
http://www.efekto.co.za
But this is what it looks like after I moved it (copied everything to the public_html folder on the new host, set up the database username and password, etc.).
Please help, because I'm utterly and truly stuck at the moment.
Someone else initially moved the site, so I decided to delete everything and move it all myself; otherwise it's hard to explain what I have and haven't tried.
First I tried to install just a base Silverstripe. I got that working pretty much, but only after I specified the database host as 127.0.0.1 rather than the external IP. So I deleted the base install again, copied over all the site files, and this time modified the database parameters to use 127.0.0.1. Some of the pages now actually show valid content, but everything is rendered in a light blue color scheme (a Silverstripe default, it seems?). It is as if a master page or template is missing: I can see content, but none of the modules like the menus, the blog section, etc.
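For reference, here is roughly what the database block in my mysite/_config.php looks like now (the credentials are placeholders):
$databaseConfig = array(
    'type'     => 'MySQLDatabase',
    'server'   => '127.0.0.1', // the external IP did not work on this host
    'username' => 'db_user',   // placeholder
    'password' => 'db_pass',   // placeholder
    'database' => 'db_name',   // placeholder
);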
I should also mention that, to test this, I changed my hosts file so that www.efekto.co.za resolves to the site's new IP address at our new host. So from my machine it resolves to the new IP, but from the server's perspective, when it fetches things like CSS it is obviously going to fetch them from wherever www.efekto.co.za is currently hosted. Hope that makes sense?
So, one step in the right direction at least: we have content and no more errors. Now what's up with this blue color scheme?

http://www.efekto.co.za seems to be fine now.
My guess: you had a silverstripe-cache folder on your old server, which you copied across. It contains (as you might suspect) cached files with absolute paths. If your path structure is not exactly the same on both servers, you will run into problems. So always remove everything from the cache folder when moving sites around.
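If you only have FTP access, a throwaway PHP script in the site root can do the wiping - a minimal sketch, assuming the default folder name (run it once, then delete it):
<?php
// one-off cleanup: recursively empty silverstripe-cache so that no cached
// absolute paths from the old server survive
$dir = dirname(__FILE__) . '/silverstripe-cache';
$it  = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS),
    RecursiveIteratorIterator::CHILD_FIRST
);
foreach ($it as $file) {
    $file->isDir() ? rmdir($file->getPathname()) : unlink($file->getPathname());
}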
If that isn't the problem, take a look at the Apache log file (probably /var/log/apache2/error.log, but this can vary). As it's a server error, it should tell you what the problem is...

Check the error log in cPanel or wherever; perhaps a PHP or Apache directive is different on this server. You'll find out from the logs what the problem is.
I got this once, and the problem was that /dev/build wouldn't run because the PHP memory limit requested in SS was set higher than the server allowed.
Also go into _config.php and set the environment type to 'dev'; this will make SS display as much output as it can on the page.
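In SilverStripe 2.x/3.x that is a one-liner in mysite/_config.php - something like:
// show full errors and stack traces instead of the friendly error page
Director::set_environment_type('dev');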

Related

Why do host file entries ending in .local result in slow content download?

I just fixed a problem but do not exactly understand why the solution works.
Setting:
Windows 10
Laravel 7.0
PHP built-in server (via php artisan serve --port=80)
Hosts file with entry 127.0.0.1 something.local
Problem:
Making a web page request to http://something.local in the browser took very long to load. Upon inspection in Chrome dev tools, I found out that it is not the server but the actual content download that is slow.
Although we are only talking about a ~7 MB download, a download time of >10 sec seemed insane to me.
Solution that I do not really understand:
Changing the hosts file entry to 127.0.0.1 something.habibi fixed it for me.
Why does it take so long to download a web page if I use an entry ending in .local in my hosts file?
Thoughts:
.local is not a top-level domain but a special-use domain (it is reserved for multicast DNS). It seems to me that because of this, a request to a URL ending in .local might not stay on my machine but go through my WLAN/router somehow, and thus take longer.
But this is as far as I got. It would be nice if someone could make better sense of this behaviour.
Unfortunately I cannot make the special behaviour of a special-use domain any clearer to you, as this is not my forte at all.
But:
Do you, by any chance, have a Chrome extension called "Xdebug helper" installed?
If this Chrome extension were the actual cause of your problem, then your solution would make sense: changing the URL in the browser has the effect that the Xdebug helper extension does not debug the new URL you just put in.
So you might think that your problem is caused by the different ending (switching from ".local" to ".habibi"), but in reality it would just be this Chrome extension being disabled for the new URL.
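One way to check: the helper signals the server by setting a cookie, so dumping it on any page shows whether debugging is being requested for the URL you're on:
// the Xdebug helper sets an XDEBUG_SESSION cookie when it is active
var_dump(isset($_COOKIE['XDEBUG_SESSION']) ? $_COOKIE['XDEBUG_SESSION'] : 'no debug cookie');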
So, long story short: your question is not reproducible and thus can't really be answered.
By the way: others seem to have the same problem.

Wordpress Media Library does not display proper thumbnail

I have just migrated the website to new hosting, and since then, whenever I try to upload new media files into the library, I get an HTTP error, and the media thumbnails look like the attached picture.
I tried to find a solution on the internet - removed and re-added the .htaccess file, added some code to it - but none of these worked. Has anyone ever faced this problem before? What's the solution?
Thank you,
Scott
This is a very annoying problem. In my experience, it occurs for one of three reasons:
Inappropriate folder permissions on the wp-content/uploads folder.
Your site using HTTPS through a CDN like Cloudflare, which needs some additional configuration (see the sketch after this list).
A change of theme or, as in your case, of host. The solution for this one is the Regenerate Thumbnails plugin.
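For the second reason, the usual fix when HTTPS terminates at the CDN is to tell WordPress that the original request was secure - a minimal wp-config.php sketch:
// trust the forwarded-protocol header set by the CDN so WordPress generates
// https:// URLs and uploads don't fail on mixed schemes
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}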
You can share some more details for further discussion.
Adding this as it might help someone; it was an annoying error on my side.
Issue: migrated from localhost to a hosting provider (in a fast, maybe hacky way):
Exported the entire database from the local MySQL (resulting in db.sql).
Changes to the file db.sql:
Changed the URL 'localhost' to the domain in the options table (manually typing it into the two rows, siteurl and home).
Replaced (as a string search-and-replace) all localhost links globally (in VIM: :%s/localhost:8080/mydomain.com/g).
Replaced (as a string search-and-replace) the default table prefix "wp_" with mine, "mf_", globally (also in VIM, as above).
Done with db.sql. Then uploaded the files (via FileZilla) and recreated the database at the hosting provider by importing db.sql.
Also set all file and folder permissions as suggested in many places on the web, not forgetting stricter permissions for wp-config.php and 644 for .htaccess.
Symptom: I could access the application and see the media grid as admin, but all thumbnails were grey, even though everything suggested above and on other websites (such as the access permissions) seemed to be set correctly.
SOLUTION: in the postmeta table (!!!), images have meta attributes whose names contain the text "wp_" (for example _wp_attached_file). By globally changing every "wp_" (the default table prefix) to "mf_" in db.sql, these attribute names were changed as well.
(What I did then: exported the postmeta table, globally replaced "mf_" back to "wp_" in VIM, and re-imported the table.)
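(If you want to avoid that export/re-import round trip, a single query can repair just the damaged keys - a sketch using my "mf_" prefix and placeholder credentials:)
<?php
// undo the accidental rename of meta keys such as _wp_attached_file,
// which the global "wp_" -> "mf_" replace turned into _mf_attached_file
$db = new mysqli('localhost', 'db_user', 'db_pass', 'db_name'); // placeholders
$db->query(
    "UPDATE mf_postmeta
        SET meta_key = REPLACE(meta_key, '_mf_', '_wp_')
      WHERE meta_key LIKE '\\_mf\\_%'"
);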
Please note: I am not a web developer, and I am aware this might not be the way it is supposed to be done, but it is the fastest, without much manual clicking around.
Maybe it helps someone; it has happened a few times to me, so hopefully next time I will find this Stack Overflow answer myself. The idea of re-uploading the images (as suggested in several places) when they are already there is sort of daunting to me.

GoogleMapAPI:createMarkerIcon: Error reading image

I am getting the error GoogleMapAPI:createMarkerIcon: Error reading image /path/to/my/image.php when trying to load a map on my website. This only happens on my staging and live systems; everything works fine on my dev machine. The files are completely the same on all three systems.
I couldn't find a definitive fix for the issue, but others have had it too, as there are a couple of threads on other boards regarding exactly this issue.
The path to the image is correct and the file is accessible.
I don't know if anyone is still having this issue, but I spent quite a while on GoogleMapAPI's createMarkerIcon() method, because it would load the image just fine on my dev machine but failed with the message GoogleMapAPI:createMarkerIcon: Error reading image /path/to/image.png on my staging and live machines.
I know this was a problem a couple of years ago and I couldn't find any threads marked as 'solved' yet, so I figured I'd share my insights with the world here.
For me, the problem was that $_SERVER['DOCUMENT_ROOT'] returned the wrong directory. This is most likely the case if you're using virtual hosts with aliases configured: as long as you don't call the website via the alias, everything works fine, but as soon as you do, the $_SERVER variable fails to reflect the correct values. This isn't limited to the 'DOCUMENT_ROOT' index, either.
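A quick way to see the mismatch: drop this into any page and load it both with and without the alias:
// what Apache reports vs. where this script actually lives
var_dump($_SERVER['DOCUMENT_ROOT'], realpath(dirname(__FILE__)));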
The Maps API, however, uses exactly this variable to determine the absolute location of the icon image. The workaround is quite simple once you know what you're looking for. First, double-check that $_SERVER['DOCUMENT_ROOT'] returns the correct path. If it does and you're still getting the error, you'll need to keep looking for a solution. If it doesn't, you can easily write an override for the API's createMarkerIcon() method: just replace the $_SERVER['DOCUMENT_ROOT'] variable with your real document root. To retrieve it, use the following line in your index.php to create a constant with the correct path. The final slash is optional, but I recommend you add it.
define('DOCROOT', realpath(dirname(__FILE__)) . '/'); // append the slash after realpath(), which strips trailing slashes
That should do it; it solved the error for me. Just don't make modifications to the API itself, so it stays updatable.

Mystery: Drupal loading old version of file

I am working on changes to a Drupal 7 site and have run into strange behavior where an old version of a file I've changed keeps reappearing. I've flushed caches via the admin interface as well as by truncating the cache_* tables.
On my staging server (which I have access to), things work fine. On our production server (which I do not have SSH access to and cannot easily get access to), they do not, and I have limited ability to debug, so I have to guess. I suspect some Drupal or Apache setting is causing these old files to be served, because the filesystem contents are identical. The behavior is almost as if Drupal will look for any file with the same name (even in the wrong directory) and serve that.
In my case, all my files are under /var/www/html (a standard LAMP setup). At one point I tarred up (tar cfz) the entire thing and left the archive at /var/www/html/archive.tgz (forgetting to remove it). So now I'm wondering if Drupal is somehow reading the contents of that archive and using the old file. Sounds crazy, but has anyone run into something like that?
The other possibility is that my cache cleaning is still incomplete in some way. Apart from truncating the cache_* tables in the database, is there any way to forcibly remove all cached entries? Any insight into this mystery would be appreciated.
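(The most drastic thing I can run without shell access is a one-off script in the web root that calls Drupal 7's drupal_flush_all_caches() - a sketch; the file name is my own, and I delete it afterwards:)
<?php
// flush.php: full Drupal 7 bootstrap, then flush every cache bin
define('DRUPAL_ROOT', getcwd());
require_once DRUPAL_ROOT . '/includes/bootstrap.inc';
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);
drupal_flush_all_caches();
echo 'All caches flushed.';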
Most likely, your production server runs an additional caching proxy such as Varnish. You need to clear the cache there as well.
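If you can't get shell access to the proxy either, Varnish can be configured to accept PURGE requests over HTTP - a hedged sketch, assuming such an ACL is in place (the URL is a placeholder):
// ask the proxy to drop its cached copy of one URL; this only works if the
// Varnish VCL has been set up to allow PURGE from your address
$ch = curl_init('http://www.example.com/sites/default/files/changed.js');
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PURGE');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
echo curl_exec($ch);
curl_close($ch);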

PHP MySQL and Proper Development / Staging before sending to Production Server

I've just gotten my production site up and running. I have a lot more work to do, and I'm now realizing the need for a development server before pushing changes live onto the production site (with users on it) - obviously...
This thread (and a lot more on Stack) describe me:
Best/Better/Optimal way to setup a Staging/Development server
Anyhow... reading these threads is outright confusing at times, with all the terminology thrown around and my limited CentOS/Apache knowledge.
My goal is to:
Make some changes to files as needed.
Test the changes on the same server, to ensure settings are identical.
If all is OK, I can then save a version of this somewhere; perhaps locally is enough for now (Bazaar seems like a possibility?).
Finally, replace all of the changed files via SSH or SFTP or something...
My worries are:
Uploading changes while users are in the system.
How to upload only the files that have changed.
I'd love somebody to either link to a great guide for what I'm thinking about (one that leaves nothing to the imagination, I'd hope) or to make some kind of suggestion. I'm running in circles trying out different version control systems and the programs to manage them, etc...
I'm the only one developing, and I just want a repeatable, trustworthy solution that can work for me without making my life too miserable trying to get it set up (and keep it set up).
Thanks much.
If you have the ability to create a staging subdomain on the production server, here is how I would (and do) handle it:
Develop on your development machine, storing your code in a VCS. I use Subversion, but you may find another you prefer. After making changes, you check in your code.
On your production server, create a subdomain in an Apache VirtualHost which is identical to, but isolated from, your production VirtualHost. Check out your code from the VCS into the staging subdomain's area. After making changes, you then run an update from your VCS, which pulls down only the changed files. Staging and production can share the same data set, or you may have a separate database for each.
The reason for using a subdomain instead of just a different directory is that it lets you use the same DocumentRoot-relative paths for both staging and production. It's also easy to tell where you are if you use something like staging.example.com.
When you're certain everything works as it's supposed to, you can run a VCS update on the production side to bring the code up to date.
It is important to be sure that you have instructed Apache to forbid access to the VCS metadata directories (.svn, .git, whatever).
Addendum
To forbid access to .svn directories use a rewrite rule like:
RewriteEngine on
RewriteRule .*\.svn/.* - [F]
This will send a 403 for them. You could also redirect them to the homepage to make it less obvious that they're even present.
In terms of worry #1, remember that even Stack Overflow periodically goes down for maintenance while people are using it. Just provide a good mechanism to switch the site into maintenance mode (and back out) and you'll be fine.
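One minimal way to build that switch in PHP (the file names here are my own invention): drop a flag file to take the site down, delete it to bring it back up.
// at the very top of index.php
if (file_exists(dirname(__FILE__) . '/maintenance.flag')) {
    header('HTTP/1.1 503 Service Unavailable'); // temporary, so crawlers don't de-index
    header('Retry-After: 300');
    readfile(dirname(__FILE__) . '/maintenance.html');
    exit;
}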
Thank you everyone for the tips/hints/etc...
I've finally found the perfect solution for my needs: SpringLoops... http://www.springloops.com/v2/
It manages my SVN (which I use TortoiseSVN with) and will actually deploy the changes onto my two sites: staging and production.
I highly recommend it, it's working wonderfully so far.
Thanks!
You need a version control system, like Subversion, Git or Mercurial.
When you change files, you commit those changes to a repository. This keeps track of all the changes you've made and lets you undo them if it turns out you did something stupid. It also lets you restore files you accidentally delete or lose.
On top of that, it makes deploying updates as simple as typing 'git pull' or 'svn update' on the production server. Only the files that changed get transferred and put in place.
You will get this advice no matter how many times the question is re-asked. You need version control.
