So we have a Drupal 6 website that is running well, but now we want to prepare it for heavy traffic. The next step is to have two web servers running the same site (the database already runs on a separate server) and then use another server to load-balance between the two.
So yesterday I mirrored the files of the original Drupal server (which runs at, let's say, www.example.com) to the new server (which runs at, let's say, 123.123.123.123, just an IP, no domain). Then I edited the settings.php file of the second one to make sure that the base URL is 123.123.123.123.
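For reference, a minimal sketch of the relevant lines in the second server's settings.php (the IP and database credentials are just placeholders matching this post):

<?php
// sites/default/settings.php on the second web server.
// Drupal 6 expects $base_url without a trailing slash.
$base_url = 'http://123.123.123.123';

// Both mirrors point at the same separate database server, so this line
// should be identical on each mirror (credentials are placeholders).
$db_url = 'mysqli://user:password@db.example.com/drupal';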
Once I browsed to 123.123.123.123 to test whether the mirror of the site was working, I got a blank page. Looking at the source, the basic structure was there, but no content, and the CSS was pointing to the right place but still not showing. I decided to browse to 123.123.123.123/admin/ to see what I could do. I went to the site performance page and cleared the cache. That didn't do a thing, but then I noticed the original Drupal site was now showing blank too. So I went to www.example.com/admin/ and cleared the cache there as well. The site came back, but it appeared the menu router was destroyed, because I was getting "page not found" everywhere. So I went to the modules page and clicked save, hoping that it would rebuild the menu router. That did the trick: the site was back online and working well.
Obviously I stopped poking around with 123.123.123.123 and decided it was time to ask the experts for some help...
What am I doing wrong? Any help would be greatly appreciated!
Julien
I don't think you can do this with D6 out of the box.
There are a couple of things that will catch you out:
Settings are stored in the database, so if your servers are not identical, one of them will not work.
The database is not set up for more than one web server accessing it, which could cause race conditions or deadlocks.
Uploaded and generated files will not be mirrored on both servers, so files will be missing.
There are probably other things too, but this is enough to be going on with.
So you have two options:
Go with something like Pressflow, which is D6-compatible and has options for running on mirrored servers.
Configure your server to handle the load.
Configuring your server may be a good starting point. Here are some tips:
Make sure Drupal caching is turned on.
Use an opcode cache like APC; see some benchmarks here.
Install the Cache Router module to use APC for Drupal's cache (see the sketch below).
Install the Boost module.
There is a much more in-depth article here.
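For the Cache Router step, the wiring goes in settings.php. A hedged sketch (the include path and array keys follow the module's README as best I recall; verify them against the version you download):

<?php
// settings.php: route Drupal 6's cache_get()/cache_set() through APC.
$conf['cache_inc'] = './sites/all/modules/cacherouter/cacherouter.inc';
$conf['cacherouter'] = array(
  'default' => array(
    'engine' => 'apc',  // store cache entries in APC shared memory
    'prefix' => '',     // set a unique prefix if APC is shared between sites
  ),
);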
I would suggest reading the article and doing everything you can on one server. While it is possible to go to 2 or even 200 servers, it adds a lot of complexity to your system.
Related
Evening All,
I'm at my absolute wits' end and hoping someone may be able to save me! I am in the process of migrating a number of PHP applications into Azure. I am using:
Linux-based App Service running PHP 7.4 (2 vCPUs, 8 GB RAM) at a cost of £94 a month.
Azure Database for MySQL 8.0 (2 vCPUs) at £114 a month.
My PHP apps run well, with decent load times of under 1 second per page. WordPress performance, however, is awful: I am going from a 1-second page load to around 10 seconds, particularly on the back end. I have read all of the Azure guides and have implemented the following obvious points:
Both the App Service and the MySQL install are in the same data center
App Service is set to 'Always On'
Connection Redirection is set to Preferred and tested as working
The same app runs fine on a very basic shared hosting package costing £10 or so a month. I also tried the same setup in Amazon Web Services today, and page load is back to a second or so.
In the Chrome console, the delay is in TTFB. I have disabled all the plugins, and none stands out as making a huge difference. Each adds a second or so of page load, suggesting to me a consistent issue whenever a page requires a number of database calls.
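One way to check whether those database calls really are the slow part (a sketch using WordPress's built-in SAVEQUERIES facility; where you log the result is up to you):

<?php
// wp-config.php: make $wpdb record every query and how long it took.
define( 'SAVEQUERIES', true );

// Then, e.g. in an mu-plugin, total the MySQL time per request:
add_action( 'shutdown', function () {
    global $wpdb;
    if ( empty( $wpdb->queries ) ) {
        return;
    }
    $total = 0;
    foreach ( $wpdb->queries as $q ) {
        $total += $q[1]; // each entry is array( sql, seconds, caller )
    }
    error_log( sprintf( '%d queries, %.3fs in MySQL', count( $wpdb->queries ), $total ) );
} );

If the logged MySQL time is a small fraction of the 10-second TTFB, the database is not the culprit.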
What is going on with Azure and the awful WordPress performance?! Is there anything else I can investigate or try? I'm really keen to stay with Azure but can't cope with the huge increase in cost for a performance hit.
The issue turned out to be the way the file system runs in the App Service. It is NOT an issue with the database. The App Service architecture is currently just too slow at file reads/writes, of which WordPress does a lot. I investigated the various file cache options, but none improved things enough.
I ended up setting up a fairly basic, and significantly cheaper, virtual machine running against the same database, and performance is hugely improved.
Not a great answer, but App Services are not up to WordPress at present!
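If you want to reproduce that diagnosis on your own instance, a crude sketch (file count and sizes are arbitrary test values) that times the many-small-reads pattern WordPress generates when loading plugin and theme files:

<?php
// Write then re-read 200 small files and time the reads.
$dir = __DIR__ . '/io-test';
@mkdir( $dir );
for ( $i = 0; $i < 200; $i++ ) {
    file_put_contents( "$dir/f$i.txt", str_repeat( 'x', 4096 ) );
}
$start = microtime( true );
for ( $i = 0; $i < 200; $i++ ) {
    file_get_contents( "$dir/f$i.txt" );
}
printf( "200 small reads took %.3fs\n", microtime( true ) - $start );

Run the same script on the App Service and on a plain VM; a large gap points at the shared file system rather than at PHP or MySQL.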
The comments below are correct: the "problem" is the database. You can either move MySQL to a virtual machine (which will give you better performance) or try cache plugins such as WP Super Cache, as well as decreasing the number of requests.
You can find a full list of tips in the following link:
https://azure.microsoft.com/en-us/blog/10-ways-to-speed-up-your-wordpress-site-on-azure-websites/
PS: ignore the date; it's still relevant.
I'm a bit of an amateur, so I'm sure I've missed something.
I'm running Divi on WordPress. When I go to update a page, I get the "Your updates couldn't be saved" error. My WordPress site, as well as its cPanel, is also loading unusually slowly, which I think is related to the issue. After working on this for a bit, both my site and its cPanel will fail to load, giving me a "can't establish a secure connection to the server" error. The third symptom, which I can't make heads or tails of: when I click "update" in the page editor, my browser will often (but not always) launch another tab/pop-up displaying either a preview of the edits or the "Pages" page on the WP admin side. All of these issues are new (although I've had similar loading-speed issues in the past with this site).
Thinking it might be an overload on my server (which happened once before, due to an attack a few months ago), I let it sit for a few days, with no luck. Then, thinking it might be a caching issue on my end, I changed my DNS servers, cleared my browser cache, tried private browsing, used my phone, and used different Wi-Fi and cellular networks. All to no avail. I briefly had slight luck using my phone as a hotspot, but it only temporarily improved the loading speeds.
I also tried disabling plugins. I made sure everything was up to date. No help.
I went into my wp-config.php file and increased the memory limit to 128M and the WP max memory limit to 256M. This helped briefly: I could update and save one page, but when I tried to change the next, I was back to square one. I've also increased the memory limits in my .htaccess file. I don't have access to my php.ini file (there are often delays reaching my host, so I'm trying to avoid relying on them when possible).
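For reference, the wp-config.php edits described above look like this (WP_MEMORY_LIMIT and WP_MAX_MEMORY_LIMIT are WordPress's standard constants; the values are the ones from this post):

<?php
// wp-config.php: raise WordPress's own memory ceilings.
// These must appear above the "That's all, stop editing!" line.
define( 'WP_MEMORY_LIMIT', '128M' );     // front-end requests
define( 'WP_MAX_MEMORY_LIMIT', '256M' ); // admin/back-end requests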
My last guess (which I have yet to implement) is to update my PHP version. That said, I'm running 7.3.6 and had no issue updating the site a few days ago, so I'm not sure that's the problem, unless Divi's newest update has compatibility issues with the 7.3 versions of PHP...
Any further ideas would be greatly appreciated! I'm partway through a cosmetic update (which, I know, should be done on a staging site, but sometimes best practices are best learnt through mistakes like this), so my site looks somewhat half-finished. That is to say, I'm anxious to be able to edit it again.
Many thanks in advance
Whenever you try to save something, Divi makes a request through admin-ajax.php. It often happens that a security firewall detects that as a threat (which it obviously is not), thus giving you the failed-save message. Can you ask your host to check which rules are triggered and whitelist that action? It can also come from plugins like Wordfence, so make sure to whitelist it there too.
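A quick way to see whether a firewall is intercepting the save (a hedged sketch using PHP's cURL extension; the URL is a placeholder for your site): WordPress itself normally answers a bare POST to admin-ajax.php with a short "0" body, so an HTTP 403 or an HTML block page instead suggests a WAF is stopping the request before WordPress sees it.

<?php
// Probe admin-ajax.php directly, bypassing the Divi editor.
$ch = curl_init( 'https://example.com/wp-admin/admin-ajax.php' );
curl_setopt_array( $ch, array(
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => array( 'action' => 'heartbeat' ),
    CURLOPT_RETURNTRANSFER => true,
) );
$body = curl_exec( $ch );
printf( "HTTP %d\n%s\n", curl_getinfo( $ch, CURLINFO_HTTP_CODE ), substr( (string) $body, 0, 200 ) );
curl_close( $ch );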
You can also attach that layout as JSON here; I can test it on my own server, and if I can save changes, we should be on the right path.
Cry for help here.
I have a fairly simple, small WordPress site, but it takes forever to load. My hosting provider said: "The server itself isn't having an issue with any other users on it and seems to be operating fine and serving other sites without issue. Almost every time an issue like this exists, where the load average is going over 500, it is usually an issue within WordPress that is causing the server to have unnecessary strain, which brings sites down."
My developers can't find any issues with their code and suggest that it is an issue with the root-level hosting or the WP install, not the sub-domain (where we did the coding). So I'm trying to find the issue and need someone's help...
This is my first post, so I apologize if it isn't in the right format. - Adam
An e-shop has been developed using PrestaShop and deployed to three servers.
The first two are on Amazon and should have identical settings.
Server 1:
http://be-pure.com/en/women/3-slim-y-tank.html
Server 2:
http://52.77.216.83/en/women/3-slim-y-tank.html
The last one is just local hosting:
Server 3:
http://internal001.zizsoft.com/be_pure/en/women/3-slim-y-tank.html
The problem is that server 1 loads very slowly compared with the other two servers, although its performance should be the best of the three.
It looks as if server 1 hasn't cached the files, but in fact all three have Smarty caching turned on (file-system storage, with recompile-when-modified) and the file-system cache enabled.
Given that the code and server settings are the same (the two Amazon servers share the same configuration, and the localhost machine is a different server that should, if anything, be slower than server 1):
1) How can I debug/check whether the files are already being served from cache?
(The cache files are located in cache/smarty and cache/cachefs on the server.)
2) What makes server 1 take so long to load? Treating it as a plain PHP site, are there any ways to check why it is slow?
Thanks a lot for helping
Refer to the comments: I misinterpreted the data I was looking at earlier. It appears the server can only handle maybe 5-10 requests at a time, so requests get blocked until the earlier ones finish loading. You likely just need to update your web server's configuration to handle more concurrent requests.
There is also a lot of JS data on the page. It is 318 KB just to load the page, and it has to make many requests for JS/CSS files before it even gets to any of the HTML. So that's 318 KB plus all the external JS/CSS it needs to fetch (wow!). That's something like 4 MB of material just to load one page.
Check the modification timestamps on the files generated by your caching system to verify that caching is working.
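A minimal sketch of that check, assuming the cache directories mentioned in the question (run from the shop root):

<?php
// Print the modification times of the Smarty cache files.
// If the timestamps change on every page view, pages are being
// regenerated instead of served from cache.
foreach ( array( 'cache/smarty', 'cache/cachefs' ) as $dir ) {
    foreach ( glob( $dir . '/*' ) as $file ) {
        printf( "%s  %s\n", date( 'Y-m-d H:i:s', filemtime( $file ) ), $file );
    }
}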
Edit:
Since there is now a bounty out: please review the comment discussion we had. There is an issue where a traceroute doesn't make it to the server destination, and I suspect that is related to the slowness, but that type of network issue is over my head.
I am answering your 2nd question.
I don't know the exact cause of the slow loading, but we faced the same issue in one of our projects last month. The server was Amazon.
One of our instances was very slow. We tried many solutions, but none of them worked. Then we found a fix that looks very crude, but it worked for us.
We just restarted the slow instance, and that did it.
I hope this solution works for you too.
All the Best :)
Answer to question 1:
You can use Chrome's developer tools: press F12, then open the Network tab. It will show you all the files being downloaded, and in the Size column you can see whether each one is being loaded from cache or not.
Answer to question 2:
You can use Chrome's YSlow plugin; it will help you a lot. But one thing is obvious about your site: you have too many files (CSS, JS, and lots of images). Try to merge your CSS and JS files and use CSS sprites for your images.
Hope you can solve your problem
Nobody can give a definitive answer about what is wrong with your servers, but you can find out with profiling. If you have the budget, I highly recommend buying a profiling tool: Tideways.io, Blackfire, or New Relic. I used New Relic, and it was really helpful for finding bottlenecks. If you don't have the budget for a profiling tool, you can use the PHP profiling extension Xdebug. It helps too, though reading Xdebug's profiling output can be a bit difficult. Its setup is really easy, however, and you can do partial profiling (profile only the URLs you want) with Xdebug's profile trigger instead of profiling every request.
Use this solid Google tool to get insight into your page's performance:
https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fbe-pure.com%2Fen%2Fwomen%2F3-slim-y-tank.html.
Also check whether server 1's configuration matches server 2's.
This tool shows me lots of suggestions for improving your website.
What is the best process for updating a live website?
I see that a lot of websites (e.g. Stack Overflow) post warnings in advance that there will be downtime for maintenance. How is that usually coded in? Do they have a config value that determines whether to display such a message in the website header?
Also, what do you do when your localhost differs from the production server and you need to make sure that everything works the same after you transfer? In my case, I set up development.mydomain.com (.htaccess authentication required), which has its own database and is basically my final staging area before uploading everything to the live production site. Is this a good approach to staging?
Lastly, is a simple SFTP upload the way to go? I've read a bit about some more complex methods, like using server-side hooks in Git, but I'm not sure how that works exactly or whether it's the approach I should be taking.
Thanks very much for the enlightenment.
babonk
This is (approximately) how it's done on Google App Engine:
Each time you deploy an application, it is associated with a subdomain according to its version:
version-1-0.example.com
version-1-1.example.com
while example.com is associated with one of the versions.
When you have a new version of the server-side software, you deploy it to version-2-0.example.com, and when you are sure you want it live, you associate example.com with it.
I don't know the details, because Google App Engine does that for me; I just set the current version.
Also, when SO or another big site has downtime, it is more likely to be a hardware issue than a software one.
That will really depend on your website and the platform/technology behind it. For a simple website, you just update the files over FTP, or, if the server is locally accessible, you just copy your new files over. If your website is hosted by some cloud service, then you have to follow whatever steps they offer, because a cloud-based hosting service usually won't let you access the files directly. For a complicated website with a backend DB, it is not uncommon that whenever you update code, you have to update your database as well. In order to make sure both are updated at the same time, you will have to take your website down. To minimize the downtime, you will probably want a well-tested update script to do the actual work. That way you can take down the site, run the script, and fire it up again.
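As a rough illustration of such an update script (a sketch only; the flag file name is made up, and the real file-sync and migration steps depend entirely on your stack):

<?php
// deploy.php -- take the site down, update everything, bring it back.
touch( 'maintenance.flag' );   // front controller shows a maintenance page
                               // while this file exists
// ... sync the new application files here (e.g. rsync from a release dir) ...
// ... run your database migrations here ...
unlink( 'maintenance.flag' );  // site is live again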
With PHP (and Apache, I assume), it's a lot easier than with some other setups (which require restarting processes, for example). Ideally, you'd have a system that knows to transfer just the files that have changed (i.e. rsync).
I use Springloops (http://www.springloops.com/v2/) to host my Git repository and automatically deploy over SFTP/FTP. Unless you have thousands of files, the deploy feels almost instantaneous.
If you really wanted to, you could have an .htaccess file (or equivalent) redirect to an "under maintenance" page for the duration of the deploy. Unless you're averaging at least a few requests per second (or the site is otherwise mission-critical), you may not even need this step (don't prematurely optimize!).
If it were me, I'd have an .htaccess file that holds the redirection instructions, and set it to redirect only during your maintenance hours. When you don't have an upcoming deploy, rename the file to ".htaccess.bak" or something. Then, in your PHP script:
<?php if (file_exists('/path/to/.htaccess')) : ?>
<h1 class="maintenance">Our site will be down for maintenance...</h1>
<?php endif; ?>
Then, to get REALLY fancy, set up a Springloops pre-deploy hook to make sure your maintenance redirect is in place, and a post-deploy hook to change it back on success.
Just some thoughts.
-Landon