Magento: some products suddenly disappear - PHP

We have Magento EE 1.14.0.1. We recently moved to a new AWS EC2 server and an ElastiCache Redis server, and then some random products started disappearing from the frontend. They exist in the backend and are configured correctly (visible, enabled, in stock, etc.). Only after you save a product in the backend does it show up again in the frontend, even without flushing any cache.
Is this issue related to the Redis cache, and if it is, how do I fix it?
Any input to point me toward a solution would be appreciated.
Thanks
Update: I had marked everything under Index Management as Update on Save. I reverted that back to Update on Schedule, and I think that fixed the issue. But I still want to keep my store inventory up to date.

"It's an index issue, every time you update data (product, stock) from database, you have to manually re-index Magento."
That is true for Community Edition, not Enterprise Edition. In addition, there can be some extra issues when migrating to AWS. After 4 months of troubleshooting an inherited server migrated into AWS, I found a number of issues and solutions.
EE issues
Enterprise Edition indexing is asynchronous for many of the indexes. In addition, not all EE indexes are configured in the typical place.
On the Admin menu, select System > Configuration. In the panel on the left, under Advanced, select Index Management.
http://docs.magento.com/m1/ee/user_guide/system-operations/index-configuration.html
Even when set to "Update on Save", in my experience it frequently does not update on save.
AsyncIndexing was unstable in versions prior to 1.14.3.x.
Upgrade! It was possible for the partial indexing process to break in such a way as to make it impossible for indexing to proceed. One way this happens is if you run PHP for the website (typically via PHP-FPM) under a different user ID and group than you use to run the cron jobs (shell access). Indexing depends on creating a file to 'lock' the process, and that file can only be written or deleted by the user that created it.
I have found that for performance reasons it is best to set ALL indexes to "Update Manually". Do not schedule a periodic reindex-all process; it is useless due to async indexing. Just make sure your cron is running and everything should be fine.
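If you ever do need to force a full reindex by hand, here is a minimal sketch (not the author's script) - run it from the shell as the same user that owns the cron jobs so the lock files mentioned above get the right owner; the install path is an assumption:

    <?php
    // full-reindex.php - hedged sketch for Magento 1.x
    require_once '/var/www/html/app/Mage.php'; // adjust to your install path
    Mage::app('admin');

    // Rebuild every index process, printing its code as we go.
    foreach (Mage::getSingleton('index/indexer')->getProcessesCollection() as $process) {
        echo 'Reindexing ' . $process->getIndexerCode() . PHP_EOL;
        $process->reindexEverything();
    }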
The AsyncIndex process uses MySQL triggers, which have an issue when you try to migrate a Magento database from one server to another. The way they are created initially, they can ONLY be used by the database user that existed when the triggers were first created. If you change the database user for the new server, the triggers will not migrate. Even worse, there is almost no indication that this has happened, and everything except indexing runs perfectly, so how can you tell?
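A quick way to check whether you are affected is to list the DEFINER of every trigger and compare it with the database user the new server actually uses. A minimal sketch (you could equally run SHOW TRIGGERS straight in the MySQL client):

    <?php
    // list-trigger-definers.php - hedged sketch, Magento 1.x bootstrap assumed
    require_once '/var/www/html/app/Mage.php';
    Mage::app('admin');

    $read = Mage::getSingleton('core/resource')->getConnection('core_read');
    foreach ($read->fetchAll('SHOW TRIGGERS') as $trigger) {
        echo $trigger['Trigger'] . ' defined by ' . $trigger['Definer'] . PHP_EOL;
    }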
Lastly, "reindex all" does not always reindex all. Thanks to various posts on the internet, I created a shell script to make Magento think all the products were updated and the index needs to be rebuilt:
https://gist.github.com/gamort/5dc5e16bdec00a8bb3b922fc463af17c
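The gist itself is a shell script; the rough idea behind it (not necessarily line for line what the gist does) is simply to mark every product as recently changed so the async indexers queue it again. A hedged sketch, assuming a stock Magento 1 EE schema - test on staging first:

    <?php
    require_once '/var/www/html/app/Mage.php';
    Mage::app('admin');

    // Bumping updated_at fires the EE changelog triggers on catalog_product_entity,
    // which queues every product for the next partial-index cron run.
    $write = Mage::getSingleton('core/resource')->getConnection('core_write');
    $write->query('UPDATE catalog_product_entity SET updated_at = NOW()');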
AWS issues
Using AWS ElastiCache Redis has a hidden gotcha: the default availability zone it launches in may be different from your server's zone. In my case, the server was in us-east-1a while Redis defaulted to us-east-1b. This resulted in occasional timeouts when looking up data from the cache. While the website code can usually recover, the indexing code does not, which leaves the index cron process in a broken state.
Almost as importantly, you pay a trivial-sounding amount per GB for data transfer between zones 1a and 1b, but when your cache is working hard, that "trivial" amount can add up to a lot! We had a recurring $10+/day ($500-$600 a month) cross-zone data transfer fee. Launch a new Redis server in your actual zone, use the Redis CLI on your web server to make sure you can connect (we had firewall configuration issues), and only THEN update your configuration.
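A quick connectivity check you can run from the web server before pointing Magento at the new node, assuming the phpredis extension is installed (the hostname is a placeholder; Credis/Cm_Cache_Backend_Redis behaves similarly):

    <?php
    $redis = new Redis(); // phpredis extension
    $connected = $redis->connect('my-cache.xxxxxx.use1.cache.amazonaws.com', 6379, 2.0);
    if (!$connected) {
        die('Cannot reach Redis - check security groups / firewall' . PHP_EOL);
    }
    var_dump($redis->ping()); // "+PONG" or true, depending on the phpredis version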
The AWS RDS server also has a hidden gotcha (I hope you're not too overwhelmed yet). Migrating the database from another server to Amazon RDS hit an extremely slight change in what MySQL considers valid SQL for a specific function... which Magento EE just happens to use. :-) I ended up installing a new copy of Magento EE and using Navicat to sync the database structures.
Solr issues
Suffice it to say, there are Solr issues as well. They are mostly due to the schema, but I also found that wiping the Solr database and letting it reindex helped.
Magento Rewrite/Form issues
This issue occurs when you upgrade to 1.14.3 - which of course you should do, since it fixes so many indexing issues. Version 1.14.3.x added form keys to a number of forms, including the customer sign-up form. So if you created your own custom .phtml templates for the logon, they will not work! You need to add the form key field to your customization. Not a big deal though, since you documented which template file you originally copied from, right?
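For reference, the usual one-liner for outputting the form key inside a Magento 1 .phtml form is:

    <!-- inside the <form> element of your customized login/registration template -->
    <?php echo $this->getBlockHtml('formkey'); ?>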
All in all, I would estimate going through the migration checklist to be a good 20 hours, and possibly up to 80 depending on what issues you run into. And at the end of the day, since the fixes are mainly in cron jobs that are not easily visible, the website owner will be hard pressed to tell how they benefited from all that work. In my case, disappearing products had already been an issue for over a year before we inherited the site, so the client was understanding about the difficulties.

It's an index issue: every time you update data (product, stock) in the database, you have to manually re-index Magento. If you don't, you'll have corrupted data in the index and you'll lose the SQL joins on the product listing query.

Related

Horrible performance on Azure App Service - WordPress

Evening All,
I'm at my absolute wits' end and hoping someone may be able to save me! I am in the process of migrating a number of PHP applications into Azure. I am using:
A Linux-based App Service running PHP 7.4 (2 vCPUs, 8 GB RAM) at a cost of £94 a month.
Azure Database for MySQL 8.0 (2 vCPUs) at £114 a month.
My PHP apps run well, with decent load times of under 1 second per page. WordPress performance, however, is awful. I am going from a 1-second page load to around 10 seconds, particularly on the back end. I have read all of the Azure guides and have implemented the following obvious points:
Both the App Service and the MySQL install are in the same data center
App Service is set to 'Always On'
Connection Redirection is set to Preferred and tested as working
The same app runs fine on a very basic shared hosting package costing £10 or so a month. I have also tried the same setup in Amazon Web Services today and page load is back to a second or so.
In the Chrome console, the delay is in TTFB. I have disabled all the plugins and none stands out as making a huge difference. Each adds a second or so to the page load, which suggests to me a consistent issue whenever a page requires a number of database calls.
What is going on with Azure and the awful WordPress performance?! Is there anything else I can investigate or try? I'm really keen to stay with Azure but can't cope with the huge increase in cost for a performance hit.
The issue turned out to be the way the file system works in the App Service. It is NOT an issue with the database. The App Service architecture is just too slow at present with file reads/writes, of which WordPress does a lot. I investigated the various file cache options, but none improved things enough.
I ended up setting up a fairly basic, and significantly cheaper, virtual machine running against the same database, and performance is hugely improved.
Not a great answer, but App Services are not up to WordPress at present!
The comments below are correct. The "problem" is the database. You can either move MySQL to a virtual machine (which will give you better performance) or try cache plugins such as WP Super Cache to decrease the number of requests.
You can find a full list of tips in the following link:
https://azure.microsoft.com/en-us/blog/10-ways-to-speed-up-your-wordpress-site-on-azure-websites/
PS: ignore the date; it's still relevant.

How to approach a long-running process in PHP?

I am working on a PHP product application which will be deployed on several client servers (say 500+). My client's requirement is that when a new feature is released for this product, we push all the changes (files & DB) to the clients' servers using FTP/SFTP. I am a bit concerned about transferring files over FTP to 500+ servers at a time and am not sure how to handle this kind of situation. I have come up with some ideas, like:
When the user (product admin) clicks update, we send an AJAX request which updates the first 10 servers and returns the count of remaining servers. From the AJAX response, we send the request again for the next 10, and so on.
OR
Create a cron job which runs every minute, checks whether any update is active, and updates the first 10 active servers. Once it completes the transfer for a server, it changes that server's status to 0 (sketched below).
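A rough sketch of that cron idea, assuming a hypothetical servers tracking table and a pushRelease() helper that does the actual SFTP transfer and DB changes:

    <?php
    // update-batch.php - run every minute from cron; names are placeholders
    function pushRelease($hostname) {
        // hypothetical helper: open SFTP, upload the release files, apply DB changes
    }

    $pdo = new PDO('mysql:host=localhost;dbname=deploy', 'deploy_user', 'secret');

    $pending = $pdo->query(
        'SELECT id, hostname FROM servers WHERE update_pending = 1 LIMIT 10'
    )->fetchAll(PDO::FETCH_ASSOC);

    foreach ($pending as $server) {
        pushRelease($server['hostname']);
        $pdo->prepare('UPDATE servers SET update_pending = 0 WHERE id = ?')
            ->execute([$server['id']]);
    }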
I just want to know: is there any other method for doing this kind of task?
Add the whole code base to a repository mechanism like Git and push all the present files to the created repository. Then write a cron job that auto-pulls the repository onto the servers and install that cron job on every server.
In the future, if you want to add a new feature, just pull the whole repository, add the feature, and push the code to the repository again; the cron job will automatically pull it on every server where you installed it.
First, I would like to provide some suggestions and insight into your proposed methods:
In both methods, you'll have to keep a list of all the servers where your application has been deployed and track whether the update has been applied to each one. That can become difficult if you later want to scale from 500+ to, say, 50,000+.
You are also not considering the case where the target server might not be functioning at the time you send the update request.
I suggest that instead of pushing the update from your end to the target servers, you do the same thing in the opposite direction. Since you are developing an entire PHP application to be deployed on client servers, I suggest you build an update module into it. Each target server can contact your servers at a designated time to check whether an update is available. If there is, I suggest one of the following two ways to proceed (a rough sketch follows below):
You send an update list providing the names and paths of the files to be updated, along with any DB changes, and the client side processes it accordingly.
You just send a response saying an update is available, and a separate process launches on the client server which downloads all the files and DB changes from your server.
For maintaining concurrency of updates you can implement a Token System, or can rely on the Time-stamp at which the update happened.
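A rough sketch of that pull-based check, run from each client server's own cron; the endpoint URL, VERSION file and response format are all assumptions for illustration:

    <?php
    // check-for-update.php - runs on the client server
    $current  = trim(file_get_contents(__DIR__ . '/VERSION'));
    $response = json_decode(
        file_get_contents('https://updates.example.com/check?version=' . urlencode($current)),
        true
    );

    if (!empty($response['update_available'])) {
        // e.g. download the package, verify its checksum, apply file and DB changes,
        // then bump the local VERSION file - all driven from the client side.
    }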

Lock writes to the database during maintenance - WordPress, MySQL

I'm doing a manual WordPress update.
I backed up the database to a .sql file.
Is there any way to temporarily prevent writes to the database while still allowing reads from the whole MySQL database?
This is to ensure
The backed up database is up to date
Users are still able to browse content on my website without disruption (I will put up a maintenance notice saying that posts will not be saved, etc.)
Update
The upgrade activity is only used as an example here.
I'm planning to make some changes directly to the database as well, and it will take a while.
I'm sure I have seen websites (famous ones) showing that they were under maintenance and that my comments/posts would not be recorded (no writes), but I was still able to browse their pages (reads were fine).
I thought it was quite a reasonable need, was it not?
I'm sure there must be a way to serve cached web pages (server-side) to users while not connecting to the database at all for up to a couple of hours (logging in/registration will not be available to users, but that's OK). How do you think I can achieve that?
To lock visitors out of posting comments and such, you can go into phpMyAdmin, find the WordPress user account (DB_USER as defined in wp-config.php) and revoke its INSERT, UPDATE and DELETE privileges. But this will probably not degrade gracefully into user-friendly error messages when visitors try anyway, ignoring your notices. And if you are doing an update through a PHP script the way WordPress installs itself, it may still need those privileges to make necessary modifications, such as adding options to the options table.
What version are you upgrading from and to? I know the install took me all of 30 seconds, and I can't imagine there would be a huge change in the database between updates. But then again, WP can be highly customized, and I don't know how far your site deviates from the standard install.
Sorry, I'm about to go to sleep so I won't go into depth, but one way that works for sure is to have two MySQL users: one for your website to read and write with, and another for your maintenance. You can temporarily change the first user's privileges to read only, and that should work.
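A sketch of that two-user approach for WordPress specifically; 'wp_readonly' is a placeholder account you would create yourself:

    // In the MySQL client, create a SELECT-only user first, e.g.:
    //   CREATE USER 'wp_readonly'@'localhost' IDENTIFIED BY 'secret';
    //   GRANT SELECT ON wordpress.* TO 'wp_readonly'@'localhost';
    // Then, for the maintenance window, swap the credentials in wp-config.php:
    define('DB_USER', 'wp_readonly');
    define('DB_PASSWORD', 'secret');
    // Reads keep working; any INSERT/UPDATE fails with a MySQL error, so pair this
    // with a maintenance notice rather than relying on friendly error messages.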

Website slow response (all other users) when large MySQL query running

This may seem like an obvious question, but we have a PHP/MySQL app that runs on a Windows 2008 server. The server has about 10 different sites running on it in total. Admin options on the site in question allow an administrator to run reports (through the site) which are huge and can take about 10 minutes in some cases. These reports are huge MySQL queries that display the data on screen. When these reports are running, the entire site goes slow for all users. So my questions are:
Is there a simple way to allocate server resources so if a (website) administrator runs reports, other users can still access the site without performance issues?
Even though running the report kills the website for all users of that site, it doesn't affect other sites on the same server. Why is that?
As mentioned, the report can take about 10 minutes to generate - is it bad practice to make these kinds of reports available on the website? Would they typically be generated by overnight scheduled tasks instead?
Many thanks in advance.
The load you're putting on the server most likely has nothing to do with the applications but with the MySQL tables you are probably slamming. Most people get around this by generating reports during downtime or by using MySQL replication to maintain a second database that is used purely for reporting.
I recommend setting up some server monitoring to see what is actually going on. I think New Relic just released Windows versions of its platform, and you can try it out for free for 30 days, I think.
There's the LOW_PRIORITY flag, but I'm not sure whether that would have any positive effect, since it's most likely a table / row locking issue that you're experiencing. You can get an idea of what's going on by using the SHOW PROCESSLIST; query.
If other websites run fine, it's even more likely that this is due to database locks (causing your web processes to wait for the lock to get released).
Lastly, it's always advisable to run big reporting queries overnight (or when the server load is minimal). Having a read replicated slave would also help.
I strongly suggest you set up a replicated MySQL server and run the large administrator queries (SELECT only, naturally) against it, to avoid the burden of having your website blocked!
If there aren't too many transactions per second, you could even run the replica on a desktop computer away from your production server, and thereby also have an off-site backup of your DB!
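A sketch of what that looks like in the application code: give the report feature its own connection to the replica so its long queries never hold locks on the primary that serves normal page requests (hostnames, credentials and the report SQL are placeholders):

    <?php
    $primary = new mysqli('db-primary.internal', 'app_user', 'secret', 'shop');    // normal traffic
    $replica = new mysqli('db-replica.internal', 'report_user', 'secret', 'shop'); // reports only

    $reportSql = 'SELECT 1'; // stand-in for the real 10-minute report query
    $result = $replica->query($reportSql); // the heavy query runs on the replica, not the primary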
Are you 100% sure you have added all the necessary indexes?
You need an insanely large website to have these kinds of problems unless you are missing indexes.
Make sure you have the right indexing, and make sure you are not joining on VARCHAR columns - they are not very fast.
I have a database with quite a few large tables and millions of records that is working 24/7.
It has loads of activity and automated services processing it without issues, thanks to proper indexing.

Magento products not showing up consistently in frontend

I have a Magento site running with 20,000-plus products. Sometimes it does not show the products on the frontend. It says "There are no products matching the selection", but the products are still there in the backend.
I know I have to run the re-indexing process, and whenever I complete the re-indexing, all products are back on the frontend.
So now, my question is: why does this happen again and again? This is now the fourth time I have faced this problem. I want to know the real cause of this issue. I am very afraid.
Thanks
The two most likely culprits are caching and indexing problems (unless of course you are using a clustered database, in which case that is probably the culprit). If it's feasible on your site (or on a dev environment, which I am sure you had the foresight to create), disable Magento's caching temporarily and see if that alleviates the issue. Also try disabling the flat catalog settings to see whether that has an effect.
Also make sure that your browser cache is set to always refresh from the server.
Hope that helps!
Thanks,
Joe
It sounds like you need to set up your cron jobs to re-run the indexes. Certainly with prices, the custom price indexes are only valid for a set period of time, and the cron job then extends those periods if the Catalog Price Rule is still active.
Here is a wiki post on the process of setting up your cronjobs.
Note that cron itself can cause problems, so as Joseph suggests, make sure you have dev and staging environments set up that mirror production so you can check the configuration.
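A quick, hedged way to confirm the cron is actually firing is to look at the most recent rows in Magento's cron_schedule table:

    <?php
    require_once '/var/www/html/app/Mage.php'; // adjust to your install path
    Mage::app('admin');

    $read = Mage::getSingleton('core/resource')->getConnection('core_read');
    print_r($read->fetchAll(
        'SELECT job_code, status, scheduled_at, finished_at
           FROM cron_schedule ORDER BY scheduled_at DESC LIMIT 10'
    ));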
