MySQL + PHP - Results different on different computers? Cache?

We have been running a local application for a bit over a year now and this problem has never come up.
We have a section of our application where a user will add a note to an account. Very simple table structure includes an account_id, user_id (who added the note) and a date.
The notes are still being added to the database just fine; however, when viewing the page, it is hit or miss whether the note will actually display when we query the database from PHP.
The only way I can get these notes to show up is by clearing the computer's cache (which doesn't always work) or clearing the MySQL cache (which, again, doesn't always work either).
The application is also accessed from outside our local network, and the issue does not seem to happen for anybody connecting that way.
I think it is a cache issue, but I am honestly not familiar with this and have not run into it with anything before. The server is the latest version of CentOS and we are running very simple MySQL queries via PHP.
Thanks, in advance, for any assistance you can provide.
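One way to start narrowing this down is to rule out each cache layer in turn. Below is a minimal diagnostic sketch, assuming a MySQL 5.x server and hypothetical table/column names; it bypasses both the browser cache and the MySQL query cache for a single request:

    <?php
    // Diagnostic sketch, not a fix. Table and column names are examples.

    // 1. Tell the browser/proxy not to cache this page.
    header('Cache-Control: no-store, no-cache, must-revalidate');
    header('Pragma: no-cache');

    // 2. SQL_NO_CACHE skips the MySQL query cache (MySQL 5.x only;
    //    the query cache was removed in MySQL 8.0).
    $mysqli = new mysqli('localhost', 'user', 'secret', 'app');
    $result = $mysqli->query(
        'SELECT SQL_NO_CACHE note, user_id, created_at
           FROM account_notes
          WHERE account_id = 123
          ORDER BY created_at DESC'
    );
    while ($row = $result->fetch_assoc()) {
        echo htmlspecialchars($row['note']), "<br>\n";
    }

If the missing notes appear with both caches bypassed, the problem is caching; if they still do not, check whether every client is actually hitting the same web and database server.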

Related

Find intensive queries/pages sitewide with Laravel

My website is built with Laravel and, for the most part, works perfectly fine. However, there is an occasional issue where the website comes to a complete stop and page loading takes 5+ minutes. I have had this issue previously, and it was because a page on the website performed a loop of thousands and thousands of queries depending on what the user entered into a form. If the user entered the number 5,000, then the page would perform 5,000 queries. Once I fixed that, everything worked perfectly again. I suspect something similar has happened again, but I'm having trouble pinpointing exactly what could be causing it.
Is there anything I can do, site-wide, to monitor this and help me locate the issue? Perhaps there is something that can be done on the operating system itself (I'm using Ubuntu)? It would be great if there were some monitoring system that let me see which pages took the longest to complete all of their queries to the database.
Thank you.
If you want to track all queries/events/errors for all users, with traces etc., then Laravel Telescope may be what you want. It is a UI for inspecting and debugging everything in Laravel. If your Laravel version is 5.7.7 or above, you can install it right away.
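(Telescope installs with composer require laravel/telescope, then php artisan telescope:install and php artisan migrate.) If pulling in a full package is more than you need, Laravel's built-in query listener can do a cruder version of the same job. The sketch below, assuming a recent Laravel 5.x app, logs every query slower than one second together with the URL that triggered it; the threshold is an arbitrary example value:

    <?php
    // app/Providers/AppServiceProvider.php

    namespace App\Providers;

    use Illuminate\Support\Facades\DB;
    use Illuminate\Support\Facades\Log;
    use Illuminate\Support\ServiceProvider;

    class AppServiceProvider extends ServiceProvider
    {
        public function boot()
        {
            // Called once per executed query; $query->time is in ms.
            DB::listen(function ($query) {
                if ($query->time > 1000) {
                    Log::warning('Slow query', [
                        'sql'      => $query->sql,
                        'bindings' => $query->bindings,
                        'time_ms'  => $query->time,
                        'url'      => request()->fullUrl(),
                    ]);
                }
            });
        }
    }

After a slow page load, the log then shows which queries, and which page, ate the time.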

Magento: some products suddenly disappear

We have Magento EE 1.14.0.1. Recently we moved to a new AWS EC2 server and an ElastiCache Redis server, and some random products started disappearing from the frontend. They exist in the backend and are configured correctly (visible, enabled, in stock, etc.), and only after you save the product in the backend does it show up again on the frontend, even without flushing any cache.
Is this issue related to the Redis cache, and if it is, how do I fix it?
Any input would be appreciated to direct me to a solution.
Thanks
Update: I had marked everything under Index Management as Update on Save, so I reverted that back to Update on Schedule, and I think that fixed the issue. But I still want to keep my store inventory up to date.
"It's an index issue, every time you update data (product, stock) from database, you have to manually re-index Magento."
That is true for Community Edition, not Enterprise Edition. In addition, there can be some extra issues when migrating to AWS. After 4 months of troubleshooting on an inherited server migrated into AWS, I found a number of issues/solutions.
EE issues
Enterprise Edition indexing is asynchronous for many of the indexes. In addition, not all EE indexes are configured in the typical place.
On the Admin menu, select System > Configuration. In the panel on the left, under Advanced, select Index Management.
http://docs.magento.com/m1/ee/user_guide/system-operations/index-configuration.html
Even when set to "update on save", in my experience it frequently does not update on save.
AsyncIndexing was unstable in versions prior to 1.14.3.x.
Upgrade! It was possible for the partial-reindex process to break in such a way as to make it impossible for indexing to proceed. One way this can occur is if you run PHP for the website (typically via PHP-FPM) under a different user ID and group than the ones that run the cron jobs (shell access). Indexing depends on the creation of a file to 'lock' the process, and the file may only be written/deleted by the user which created it.
I have found that, for performance reasons, it is best to set ALL indexes to "update manually". Do not schedule a periodic reindex-all process; it is useless due to async indexing. Just make sure your cron is running and everything should be fine.
The AsyncIndex process uses MySQL triggers... which have an issue when you try to migrate a Magento database from one server to another. The way they are created initially, they can ONLY be used by the database user that was in place when the triggers were first created (the trigger's DEFINER). If you change the database user for the new server, the triggers will not migrate. Even worse, there is almost no indication that this has occurred, and everything except indexing runs perfectly, so how can you tell?
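One way to check for this after a migration is to list each trigger's DEFINER and compare it against the users that actually exist on the new server. A minimal sketch, with placeholder connection details:

    <?php
    // List trigger definers so you can spot any owned by a user that
    // no longer exists on the new server. Credentials are placeholders.
    $pdo = new PDO('mysql:host=localhost;dbname=magento', 'root', 'secret');
    $stmt = $pdo->query(
        "SELECT TRIGGER_NAME, DEFINER
           FROM information_schema.TRIGGERS
          WHERE TRIGGER_SCHEMA = DATABASE()"
    );
    foreach ($stmt as $row) {
        echo "{$row['TRIGGER_NAME']} is defined by {$row['DEFINER']}\n";
    }

If a DEFINER refers to a user that is missing on the new server, recreating the triggers under the new user is the usual fix.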
Lastly, "reindex all" does not always reindex all. Thanks to various posts on the internet, I created a shell script to make Magento think all the products were updated and the index needs to be rebuilt:
https://gist.github.com/gamort/5dc5e16bdec00a8bb3b922fc463af17c
AWS issues
Using AWS ElastiCache Redis has a hidden gotcha: the default availability zone it is launched in may be different from your server's zone. In my case, the server was in USEAST-1a while Redis defaulted to USEAST-1b. This resulted in occasional timeouts when looking up data from the cache. While the website code can usually recover, the indexing code does not, which leaves the index cron process in a broken state.
Almost as importantly, you pay a per-GB fee for data transfer from zone 1a to 1b. It sounds trivial, but when your cache is working hard, this "trivial" amount can add up to a lot! We had a recurring $10+/day ($500-$600 a month) intrazone data-transfer fee! Launch a new Redis server in your actual zone, use the Redis CLI on your web server to make sure you can connect (we had firewall configuration issues), and only THEN update your configuration.
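That connectivity check can also be scripted from the web server itself. A sketch using the phpredis extension, with a placeholder hostname:

    <?php
    // Run on the web server before switching the cache configuration.
    // Requires the phpredis extension; host and port are placeholders.
    $redis = new Redis();
    try {
        $redis->connect('my-new-node.cache.amazonaws.com', 6379, 2.0); // 2 s timeout
        var_dump($redis->ping()); // a healthy node answers the ping
    } catch (RedisException $e) {
        echo 'Cannot reach Redis: ', $e->getMessage(), "\n";
    }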
The AWS RDS server also has a hidden gotcha (hope you're not too overwhelmed yet). Migrating the database from another server to Amazon RDS has issues: there was an extremely slight change in what MySQL considers valid SQL for a specific function... which Magento EE just happens to use. :-) I ended up installing a new copy of Magento EE and using Navicat to sync the database structures.
Solr issues
Suffice it to say, there are Solr issues as well. Mostly due to the schema, but I also found that wiping the Solr database and letting it reindex helped.
Magento Rewrite/Form issues
This issue occurs when you upgrade to 1.14.3, which of course you should do since it fixes so many indexing issues. Version 1.14.3.x added form keys to a number of forms, including the customer sign-up form. So if you created your own custom .phtml templates for the logon, they will not work! You need to add that form-key field to your customization. Not a big deal though, since you documented which template file you originally copied from, right?
All in all, I would estimate going through the checklist for migration to be a good 20 hours, and possibly up to 80 depending on what issues you run into. And at the end of the day, since the fixes are mainly in cron jobs which are not easily visible, the website owner will be hard pressed to tell how they benefited from all that work. In my case, disappearing products had already been an issue for over a year before we inherited the site, so the client was understanding about the difficulties.
It's an index issue, every time you update data (product, stock) from database, you have to manually re-index Magento. If you don't do that, you'll have corrupted data in the index and you'll lose the SQL join on the product request list.

Migrating WordPress servers, same domain... a few questions

I'm helping a friend migrate her WordPress server to GoDaddy, and I think I may have bitten off more than I can chew... I've never migrated a WordPress site before. This page here is the WordPress wiki for moving WordPress when your domain isn't changing. It doesn't seem too complex, but I'm terrified of accidentally ruining this website, and I don't understand a couple of things on the wiki.
The Wiki says
If database and URL remains the same, you can move by just copying your files and database.
Does this mean that I can just log in to her server from FileZilla and copy all of the files on the server? What does "database" mean? Is that something separate from the files on the server?
If database name or user changes, edit wp-config.php to have the correct values.
This sort of goes with my first question... what initiates a database name or user change?
Apologies for my ignorance, but after an hour or so of searching around for these answers I'm left just as confused.
Last but not least, is there anything else I should be aware of when migrating a WordPress site? I'm a little nervous...
You are going to need to migrate your installation in two parts.
Part 1 you already alluded to. You will need to copy the files from one server to another. I am guessing you know how to do this, so I will not dive any deeper into it. If you do need more explanation, please let me know and I will edit the answer.
Part 2 is what you mentioned but said you did not understand: copying the database of the WP install. WordPress runs off of PHP and MySQL. The "files" part in part 1 is the PHP files (along with some HTML and CSS). You need to log into her MySQL server and do an export of the database. You should be able to export the database (How to export mysql database to another computer?) and import it into her new server on GoDaddy (Error importing SQL dump into MySQL: Unknown database / Can't create database).
Just take things slow, follow the guides that I have linked, and do not delete anything from the first server until everything is working on the second. Please let me know if you do not understand anything.
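For the database step, the usual tools are mysqldump for the export and the mysql client for the import. A sketch, where every name and credential is a placeholder:

    <?php
    // Hypothetical wrapper around the standard CLI tools, runnable
    // anywhere PHP has shell access. All names/credentials are examples.

    // 1. On the old server: dump the WordPress database to a file.
    shell_exec('mysqldump --user=wpuser --password=secret wordpress > /tmp/wordpress.sql');

    // 2. Copy /tmp/wordpress.sql to the new server (e.g. via SFTP).

    // 3. On the new server: create an empty database, then import.
    shell_exec('mysql --user=wpuser --password=secret wordpress < /tmp/wordpress.sql');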
If you don't feel comfortable with database exports and imports, try using plugins like:
http://wordpress.org/plugins/duplicator/
or
http://wordpress.org/plugins/wordpress-move/
Check their docs for info.
Good luck!
• A database is literally a data base. It's where websites (and other applications) store their data, e.g. for WordPress it would be data such as posts, user information, etc.
If you are using a cPanel setup, then you would need to get access to it and navigate to phpMyAdmin, which is the GUI for managing a database.
Now, I'm not sure what type of setup you're using, but that should be a start.
• A database has a connection server address (usually localhost), a database name, a username and a password. These are set up at the time the database is created.
When migrating servers, you need to update those details in the wp-config.php file (around line 19 or so, I think); see the sketch after this list.
• The annoying part about migrating WordPress to another server is usually the domain change, since you have to replace the old domain with the new domain throughout the database. Since you're not changing domain names, though, it should be a smooth ride as long as the new server supports PHP and has a database.
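For reference, the database block near the top of wp-config.php looks like this; the values shown are placeholders for whatever the new host assigns:

    <?php
    // wp-config.php - the database settings WordPress connects with.
    define('DB_NAME',     'example_db');       // database name
    define('DB_USER',     'example_user');     // MySQL username
    define('DB_PASSWORD', 'example_password'); // MySQL password
    define('DB_HOST',     'localhost');        // often localhost, but some
                                               // hosts use a dedicated DB hostname

If the new host issues a different database name or user, these four lines are the only ones that need to change.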

Lock writes to database during maintenance - WordPress/MySQL

I'm doing a manual WordPress update.
I backed up the database to a .sql file.
Is there any way to prevent writes to the database temporarily, while still allowing reads of the whole MySQL database?
This is to ensure that:
The backed-up database is up to date
Users are still able to browse content on my website without disruption (I will put up a maintenance notice saying that posts will not be saved, etc.)
Update
The upgrade activity is only used as an example here.
I'm planning to make some changes directly to the database as well, and it will take a while.
I'm sure I have seen websites (famous ones) showing that they were under maintenance and that comments/posts would not be recorded (no writes), but I was still able to browse the sites (reads were fine).
I thought it was quite a reasonable need, was it not?
I'm sure there must be a way to serve cached webpages (server-side) to users while not connecting to the database at all for up to a couple of hours (logging in/registration will not be available to users, but that's OK). How do you think I can achieve that?
To lock out visitors from posting comments and such, you can go into phpMyAdmin, find the WordPress user account (DB_USER as defined in wp-config.php) and revoke its INSERT, UPDATE and DELETE privileges. But this will probably not degrade gracefully into user-friendly error messages when visitors try to post anyway, ignoring your notice. And if you are doing the update through a PHP script, the way WordPress installs itself, the script may still need these privileges to make necessary modifications, like adding options to the options table.
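The same thing can be scripted instead of clicked. A sketch, run as a privileged MySQL account rather than the WordPress user; the database, user and host names are placeholders:

    <?php
    // Maintenance sketch: make the WordPress user effectively
    // read-only, then restore it afterwards. Names are placeholders.
    $admin = new PDO('mysql:host=localhost', 'root', 'admin_secret');

    // Before maintenance: drop write privileges (reads keep working).
    $admin->exec("REVOKE INSERT, UPDATE, DELETE ON wordpress.* FROM 'wpuser'@'localhost'");

    // ... run the backup / direct database changes here ...

    // After maintenance: restore write privileges.
    $admin->exec("GRANT INSERT, UPDATE, DELETE ON wordpress.* TO 'wpuser'@'localhost'");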
What version are you upgrading from and to? I know the install took me all of 30 seconds, and I can't imagine there would be a huge change in the database between updates. But then again, WP can be highly customized, and I don't know the extent to which your site deviates from the standard install.
Sorry, I'm about to go to sleep, so I won't go into depth, but one way that works for sure is to have two MySQL users: one for your website to read/write with, and another for your maintenance. You can temporarily change the first user's privileges to read-only, and that should work.

Two-way MySQL database sync between hosted and local production servers

So the scenario is this:
I have a MySQL database on a local server running Windows Server 2008. The server is only meant to be accessible to users on our network and contains our company's production schedule information. I have what is essentially the same database running on a hosted Linux server, which is meant to be accessible online so our customers can connect to it and update their orders.
What I want to do is a two-way sync of two tables in the database, so that the orders are current in both databases, and a one-way sync from our server to the hosted one for the data in the other tables. The front end to the database is written in PHP. I will describe what I am working with so far, and I would appreciate it if people could let me know whether I am on the right track or barking up the wrong tree, and hopefully point me in the right direction.
My first idea is to make (at the end of the PHP scripts that generate changes to the orders tables) an export of the changes that have been made, perhaps using SELECT ... INTO OUTFILE with a WHERE clause on the account, or something similar. This would keep the size of the file small, rather than exporting the entire orders table. What I am hung up on is (A) how to export this as an SQL file rather than a CSV, (B) how to include information about what has been deleted as well as what has been inserted, and (C) how to fetch this file on the other server and execute the SQL statements.
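For concreteness, a sketch of that OUTFILE export with hypothetical table/column names. Note that INTO OUTFILE writes delimited text on the database server's own filesystem and cannot emit SQL statements or deleted rows, which is exactly why points (A) and (B) are hard with this approach alone:

    <?php
    // Export recent changes for one account to a server-side file.
    // Requires the MySQL FILE privilege; names/paths are placeholders.
    $pdo = new PDO('mysql:host=localhost;dbname=production', 'user', 'secret');
    $sql = "SELECT * INTO OUTFILE '/tmp/orders_changes.csv' "
         . "FIELDS TERMINATED BY ',' ENCLOSED BY '\"' "
         . "LINES TERMINATED BY '\\n' "
         . "FROM orders WHERE account = 42";
    $pdo->exec($sql);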
I am looking into SSH and PowerShell currently, but can't seem to formulate a solid vision of exactly how this will work. I am looking into cron jobs and Windows scheduled tasks as well. However, it would be best if the updates simply occurred whenever there was a change, rather than on a schedule, to keep the databases synced in real time, but I can't quite figure that one out. I'd want to run the scheduled task/cron job at least once every few minutes, though I guess all it would need to do is check whether there were any dump files that needed to be put onto the opposing server, not necessarily sync anything if nothing had changed.
Has anyone ever done something like this? We are talking about changing/adding/removing from 1 (min) to 160 (max) rows in the tables at a time. I'd love to hear people's thoughts about this whole thing as I continue researching my options. Thanks.
Also, just to clarify, I'm not sure that one of these is really a master or a slave. There isn't one that always has the accurate data; it's more that the most recent data needs to be in both.
One more note: another thing I am thinking about now is to add, at the end of the order-updating script on one side, another config/connect script pointing to the other server's database, and then rerun the exact same queries, since the two databases have identical structures. Now that just sounds too easy... Thoughts?
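A minimal sketch of that dual-write idea, with placeholder DSNs and a placeholder query; the catch, as the answers below point out, is what happens when the second connection fails:

    <?php
    // Dual-write sketch: run the same statement against both servers.
    // If the remote is down, the databases silently diverge unless the
    // failed statement is queued somewhere for a retry.
    $opts   = [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION];
    $local  = new PDO('mysql:host=localhost;dbname=schedule', 'user', 'secret', $opts);
    $remote = new PDO('mysql:host=hosted.example.com;dbname=schedule', 'user', 'secret', $opts);

    $sql    = 'UPDATE orders SET qty = ? WHERE order_id = ?';
    $params = [3, 42];

    $local->prepare($sql)->execute($params);
    try {
        $remote->prepare($sql)->execute($params);
    } catch (PDOException $e) {
        // TODO: append $sql/$params to a retry queue, as in the
        // file-based approach suggested in the last answer below.
        error_log('Remote sync failed: ' . $e->getMessage());
    }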
You may not be aware that MySQL itself can be configured with databases on separate servers that opportunistically sync to each other. See here for some details; also, search around for MySQL ring replication. The setup is slightly brittle and will require you to learn a bit about MySQL replication. Or you can build a cluster: a much higher learning curve, but less brittle.
If you really want to roll it yourself, you have quite an adventure in design ahead of you. The biggest problem you have to solve is not how to make it work; it's how to make it work correctly after one of the servers goes down for an hour, or your DSL modem melts, or a hard drive fills up or...
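If you do go the replication route, it is worth checking from PHP that the replica is actually applying changes. A sketch with placeholder credentials; SHOW SLAVE STATUS is the classic statement name (MySQL 8.0.22+ renamed it SHOW REPLICA STATUS):

    <?php
    // Replication health check; run against the replica server.
    $replica = new PDO('mysql:host=replica.example.com', 'monitor', 'secret');
    $status  = $replica->query('SHOW SLAVE STATUS')->fetch(PDO::FETCH_ASSOC);

    if (!$status
        || $status['Slave_IO_Running'] !== 'Yes'
        || $status['Slave_SQL_Running'] !== 'Yes') {
        error_log('Replication is broken; investigate before the data diverges.');
    } elseif ((int) $status['Seconds_Behind_Master'] > 300) {
        error_log('Replica is more than 5 minutes behind.');
    }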
Running each query against both a local and a remote server can be a problem if the connection breaks. It is better to store each query locally in a file named for the current hour, such as YYYY-MM-DD-HH.sql, and then send the data once the hour has expired. The update period can be reduced to 5 minutes, for example.
This way, if the connection breaks, the re-established connection picks up all the left-over files.
At the end of the file, insert a CRC for checking the content.
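A sketch of that queue-file idea, with hypothetical paths and the CRC written to a sidecar file; a cron job on either side would ship and verify the finished files:

    <?php
    // Instead of writing to the remote server directly, append each
    // statement to the current hour's file. Paths are placeholders.
    $dir  = __DIR__ . '/queue';
    $file = $dir . '/' . date('Y-m-d-H') . '.sql';
    $stmt = "INSERT INTO orders (account, qty) VALUES (42, 3);";
    file_put_contents($file, $stmt . "\n", FILE_APPEND | LOCK_EX);

    // Once the previous hour's file is complete, store its CRC so the
    // receiving side can verify the content before executing it.
    $done = $dir . '/' . date('Y-m-d-H', time() - 3600) . '.sql';
    if (is_file($done) && !is_file($done . '.crc')) {
        $crc = sprintf('%u', crc32(file_get_contents($done)));
        file_put_contents($done . '.crc', $crc);
    }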
