PHP application - mocking / speeding up MySQL requests for local environment?

I'm not a back-end developer and I don't know much about devops, but I'll try.
I've been working with Docker containers as a front-end developer on many applications, mostly PHP-based - Laravel, Phalcon, Symfony, you name it.
One thing that has been bugging me for years now is the performance of the local environment, especially when it comes to (any?) databases - it doesn't matter whether I'm using a remote MySQL database on a server or a MySQL container on localhost, requests are always super slow compared to production; in the worst cases reloading some pages took tens of seconds, if not minutes.
I'm wondering whether it is possible for a simple front-end dev to set up a mocked MySQL database somehow - an object that pretends to be my MySQL database, with all the structure and data, but much faster? 99% of the time I do not want to update records in my databases anyway, so something read-only would be totally sufficient. Of course I want this to work like a normal DB, meaning it can receive and send data.
I won't lie, I had the same problem with almost every single startup I worked on (excepting... WordPress :P), not only in PHP but also in other languages. I'm using a top-tier MacBook, so its performance is still far from a real web server's, but I think it should be sufficient.
Thanks for any hints :)
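One common culprit not mentioned in the question: on Docker for Mac, database slowness is usually file-I/O bound, not CPU bound. Since the use case is effectively read-only and disposable, one workaround is to keep the MySQL data directory on tmpfs (RAM) and re-seed from a dump on every container start. A minimal docker-compose sketch; the service name, credentials, and dump path are placeholders, not from the original thread:

```yaml
services:
  db:
    image: mysql:8.0
    tmpfs:
      - /var/lib/mysql        # datadir lives in RAM: fast, wiped on restart
    environment:
      MYSQL_DATABASE: app
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      # re-imported automatically on each fresh start
      - ./dump.sql:/docker-entrypoint-initdb.d/dump.sql:ro
    ports:
      - "3306:3306"
```

The trade-off is that all data is lost when the container stops, which is exactly acceptable for a read-mostly local copy.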

Related

Zend Framework Multiple Database Stop Working with slow query

I'm running a big Zend Framework web application with five databases, independent of each other, distributed across 2 database servers running MySQL 5.6.36 on CentOS 7, each with 16GB RAM and an 8-core processor. However, if one of the 2 database servers stops responding because of a slow query, users on the other server cannot access the web application. The only way to bring the application back is to restart MySQL on that server. I've tried different things without success. The strange thing is that if I turn off one of the servers, the system continues to work correctly.
It's hard to offer a meaningful answer, because you've given us no information about your tables or your queries (in fact, you haven't asked a question at all, you've just told us a story! :-).
I will offer a guess that you are using MyISAM for one or more of your tables. This means a query from one client locks the table(s) it queries, and blocks concurrent updates from other clients.
To confirm you have this problem, use SHOW PROCESSLIST on each of your two database servers at the time you experience the contention between web apps. You might see a bunch of queries stuck waiting for a lock (it may appear in the processlist with the state of "Updating").
If so, you might have better luck if you alter your tables' storage engine to InnoDB. See https://dev.mysql.com/doc/refman/5.7/en/converting-tables-to-innodb.html
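The checks above can be sketched as SQL; the table name in the ALTER is hypothetical:

```sql
-- Look for queries stuck waiting on a table lock
-- (often shown with a State of "Updating" or "Locked")
SHOW FULL PROCESSLIST;

-- Find which tables are still MyISAM
SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = DATABASE() AND engine = 'MyISAM';

-- Convert one of them to InnoDB (row-level locking)
ALTER TABLE orders ENGINE = InnoDB;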

MySQL failover & PHP

We run a fairly busy website, and currently it runs on a traditional one server LAMP stack.
The site has lots of legacy code, and the database is very big (approx. 50GB when gzipped, so probably 4 or 5 times that uncompressed).
Unfortunately, the site is very fragile and although I'm quite confident load balancing servers with just one database backend I'm at a bit of a loss with replication and that sort of thing.
There's a lot of data written to and read from the database at all times. I think we can probably fail over to a slave MySQL database fairly easily, but I'm confused about what needs to happen when the master comes back online (if a master/slave setup is suitable...). Does the master pick up any rows written to the slave while it was down, or does something else have to happen?
Is there a standard way of making PHP decide whether to use a master or slave database?
Perhaps someone can point me in the way of a good blog post that can guide me?
Thanks,
John
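For context on the master/slave question: MySQL's built-in asynchronous replication is typically enabled with a few my.cnf lines. This is a minimal illustrative sketch, not from the thread; server ids and log names are hypothetical. Note that a failed master does not automatically absorb rows written to a promoted slave; the usual approach is to re-attach the old master as a slave of the new master (or resync it) before failing back.

```ini
# master my.cnf
[mysqld]
server-id = 1
log-bin   = mysql-bin

# slave my.cnf
[mysqld]
server-id = 2
relay-log = mysql-relay-bin
read_only = 1   # reject application writes while acting as a slave
```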
If you are trying to create a failover solution for your entire website, I found this article interesting. It talks about creating a clone of the MySQL database and keeping the two in sync with rsync.
A simpler solution would be to just back up your database periodically with a script that runs from a cron job, and also set up a static web page as a failover. This website has an article on setting that up. That's the way we do it: if your database has issues, you can restore it using one of your backups while you fail over to your temporary static page.
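The periodic-backup idea above boils down to a single crontab entry. A sketch with hypothetical paths, credentials, and schedule; `--single-transaction` keeps the dump consistent for InnoDB tables without locking:

```
# Nightly 2am dump, gzipped, rotating by weekday (Mon.sql.gz, Tue.sql.gz, ...)
0 2 * * * mysqldump --single-transaction -u backup -pSECRET mydb | gzip > /var/backups/mydb-$(date +\%a).sql.gz
```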

PHP/MySQL Performance Testing with Just PHP

I'm trying to diagnose a server where the website is loading very slowly, but unfortunately my client has only provided me with FTP access.
I've got FTP access so I can upload PHP scripts, but can't set up any other server side tools.
I have access to phpMyAdmin, but not direct access to the MySQL server. It is also unfortunately a Windows server (and we've been a Linux shop for over a decade now).
So, if I want to evaluate MySQL & disk speed performance through PHP on a generic server, what is the best way to do this?
There are already tools like:
https://github.com/raphaelm/php-benchmark or https://github.com/InfinitySoft/php-benchmark
But I'm surprised there isn't something that someone has already set up & configured to just run through and do some basic testing of a server's responsiveness.
Every time we evaluate a new server environment it's handy to be able to compare it to an existing one quickly to see if there are any anomalies. I guess I'd just hoped that someone else had written up a script to do this already. I know I have, but that was before GitHub, when there wasn't a handy place to post scraps of code like this.
You've probably already done this, but just in case... If I were in your shoes, the first thing I'd be looking at are the indexes on the mysql tables and the queries in the application. I've seen some sites get huge speed boosts just by fixing a join or adding a missing index.
Don't forget to check the code for performance issues or calls to sleep(). If you haven't yet, it may be helpful to get the code running locally so you can run it through xdebug.
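Checking indexes and joins as suggested above usually starts with EXPLAIN, which is available from phpMyAdmin's SQL tab. A sketch with hypothetical table and column names:

```sql
-- See how MySQL executes a suspect query
EXPLAIN SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at > '2013-01-01';

-- type=ALL with a large "rows" estimate usually means a full table scan;
-- adding the missing index is often the fix:
ALTER TABLE orders ADD INDEX idx_created_at (created_at);
```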

Python sync with mySQL for local application?

There have been many questions along these lines but I'm struggling to apply them to my scenario. Any help would be greatly appreciated!
We currently have a functioning MySQL database hosted on a website; data is entered on the website and inserted into the database via PHP.
We now also want to create a Python application that works offline. It should carry out all the same functions as the web version and run totally locally. This means it needs a copy of the entire database to run locally, and when changes are made to that local database they should be synced the next time an internet connection is available.
First off, I have no idea what the best method would be to run such a database offline. I was considering just setting up MySQL on localhost, however this needs to be distributable to many machines, so setting that up via an installer of some sort may be impractical, no?
Secondly, synchronization? Not a clue how to go about this!
Any help would be very very very appreciated.
Thank you!
For binding Python to MySQL you could use HTSQL:
http://htsql.org
You can then also query your MySQL DB via http requests, either from AJAX calls or server-side e.g. cURL (and of course still have the option of writing standard SQL queries).
There is a JQuery plugin called HTRAF that handles the client side AJAX calls to the HTSQL server.
The HTSQL server runs on localhost as well.
What OS would you be using?
How high-performance does your local application need to be? Also, how reliable is the locally available internet connection? If you don't need extremely high performance, why not just leave the data in the remote MySQL server?
If you're sure you need access to local data I'd look at MySQL's built-in replication for synchronization. It's really simple to set up and use, and you could use it to maintain a local read-only copy of the remote database for quick data access. You'd simply build into your application the ability to perform write queries on the remote server and do read queries against the local DB. The lag time between the two servers is generally very low, on the order of milliseconds, but you do still have to contend with network congestion preventing a local slave database from being perfectly in sync with the master at every instant.
As for the Python side of things, google mysql-python because you'll need a Python MySQL binding to work with a MySQL database. Finally, I'd highly recommend SQLAlchemy as an ORM with Python because it'll make your life a heck of a lot easier.
I would say an ideal solution, however, would be to set up a remote REST API web service and use that in place of directly accessing the database. Of course, you may not have the in-house capabilities, the time or the inclination to do that ... which is also okay :)
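The write-remote/read-local split described above can be sketched in Python. So the sketch is runnable here, sqlite3 stands in for both MySQL connections; with real MySQL you would pass DB-API connections from a MySQL driver instead (a hypothetical substitution, not part of the original answer):

```python
import sqlite3

class SplitRouter:
    """Route read queries to a local replica and writes to the remote master.

    sqlite3 connections stand in for MySQL ones purely for illustration.
    """

    def __init__(self, local_conn, remote_conn):
        self.local = local_conn    # read-only replica kept fresh by replication
        self.remote = remote_conn  # authoritative master

    def execute(self, sql, params=()):
        # Crude routing rule: SELECTs go local, everything else goes remote.
        is_read = sql.lstrip().upper().startswith("SELECT")
        target = self.local if is_read else self.remote
        cur = target.execute(sql, params)
        target.commit()
        return cur

# Demo: the "remote" master receives the INSERT, the "local" replica serves reads.
remote = sqlite3.connect(":memory:")
local = sqlite3.connect(":memory:")
for conn in (remote, local):
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
local.execute("INSERT INTO users (name) VALUES ('alice')")  # pretend replication delivered this row
local.commit()

router = SplitRouter(local, remote)
router.execute("INSERT INTO users (name) VALUES ('bob')")   # routed to remote
rows = router.execute("SELECT name FROM users").fetchall()  # served locally
print(rows)  # [('alice',)] - the replica hasn't caught up with 'bob' yet
```

The final print also illustrates the replication-lag caveat from the answer: a read from the replica can briefly miss a row just written to the master.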
Are you planning to run MySQL alongside your local offline Python apps? I would suggest something like SQLite instead. As for keeping things in sync, it also depends on the type of data that needs to be synchronized. One question that needs to be answered:
Is the data generated by these Python apps opaque? If yes (i.e. it doesn't have any relations to other entities), then you can queue the data locally and push it up to the centrally hosted website.
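The "queue locally, push when online" idea can be sketched with the stdlib sqlite3 module. The table and function names are hypothetical, and the demo uses a stub uploader; in practice `upload` would be an HTTP POST to the site's existing PHP endpoint:

```python
import json
import sqlite3

def init_queue(conn):
    # Outbox table: each local change waits here until it has been uploaded.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS outbox ("
        " id INTEGER PRIMARY KEY, payload TEXT, synced INTEGER DEFAULT 0)"
    )
    conn.commit()

def record_change(conn, change):
    # Store each local edit as an opaque JSON blob.
    conn.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(change),))
    conn.commit()

def flush_queue(conn, upload):
    # `upload` is any callable that sends one change to the server
    # (e.g. an HTTP POST to the site's PHP endpoint - hypothetical).
    pending = conn.execute(
        "SELECT id, payload FROM outbox WHERE synced = 0 ORDER BY id"
    ).fetchall()
    for row_id, payload in pending:
        upload(json.loads(payload))
        conn.execute("UPDATE outbox SET synced = 1 WHERE id = ?", (row_id,))
    conn.commit()
    return len(pending)

# Demo with an in-memory DB and a list standing in for the network call.
conn = sqlite3.connect(":memory:")
init_queue(conn)
record_change(conn, {"table": "users", "op": "insert", "name": "alice"})
record_change(conn, {"table": "users", "op": "insert", "name": "bob"})

uploaded = []
print(flush_queue(conn, uploaded.append))  # 2
```

Because each change is marked synced only after its upload succeeds, a connection drop mid-flush just leaves the remaining rows queued for the next attempt.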

Does a separate MySQL server make sense when using Nginx instead of Apache?

Consider a web app in which a call to the app consists of PHP script running several MySQL queries, some of them memcached.
The PHP does not do a very complex job. It mainly serves the MySQL data with some formatting.
In the past it used to be recommended to put MySQL and the app engine (PHP/Apache) on separate boxes.
However, when the data can be divided horizontally (for example when there are ten different customers using the service and it is possible to divide the data per customer) and when Nginx + FastCGI is used instead of the heavier Apache, doesn't it make sense to put Nginx, Memcached, and MySQL on the same box? Then when more customers come, add similar boxes?
Background: We are moving to Amazon Ec2. And a separate box for MySQL and app server means double EBS volumes (needed on app servers to keep the code persistent as it changes often). Also if something happens to the database box, more customers will fail.
Clarification: Currently the app is running with LAMP on a single server (before moving to EC2).
If your application architecture is already designed to support Nginx and MySQL on separate instances, you may want to host all your services on the same instance until you receive enough traffic that justifies the separation.
In general, creating new identical instances with the full stack (Nginx + Your Application + MySQL) will make your setup much more difficult to maintain. Think about taking backups, releasing application updates, patching the database engine, updating the database schema, generating reports on all your clients, etc. If you opt for this method, you would really need to find some big advantages in order to offset all the disadvantages.
You need to measure carefully how much memory overhead everything has - I can't see nginx vs. Apache making much difference; it's PHP which will use all the RAM (this in turn depends on how many processes the web server chooses to run, but that's more of a tuning issue).
Personally I'd stay away from nginx on the grounds that it is too risky to run such a weird server in production.
Databases always need lots of ram, and the only way you can sensibly tune the memory buffers is to have them on dedicated servers. This is assuming you have big data.
If you have very small data, you could keep it on the same box.
Likewise, memcached makes almost no sense if you're not running it on dedicated boxes. Taking memory from MySQL to give to memcached is really robbing Peter to pay Paul. MySQL can cache stuff in its innodb_buffer_pool quite efficiently (This saves IO, but may end up using more CPU as you won't cache presentation logic etc, which may be possible with memcached).
Memcached is only sensible if you're running it on dedicated boxes with lots of ram; it is also only sensible if you don't have enough grunt in your db servers to serve the read-workload of your app. Think about this before deploying it.
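On a dedicated database box, the InnoDB memory tuning mentioned above mostly comes down to one setting. A my.cnf sketch; the size is a hypothetical value for a machine with around 16GB of RAM, following the common rule of thumb of giving the buffer pool most of the memory on a dedicated server:

```ini
[mysqld]
# Rule of thumb on a dedicated DB box: roughly 70-80% of RAM.
innodb_buffer_pool_size = 12G
```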
If your application is able to work with PHP and MySQL on different servers (I don't see why this wouldn't work, actually), then, it'll also work with PHP and MySQL on the same server.
The real question is: will your servers be able to handle the load of Apache/nginx/PHP, MySQL, and memcached combined?
And there is only one way to answer that question: you have to test in a "real" "production" configuration, to determine how loaded your servers are -- or use some tool like ab, siege, or OpenSTA to "simulate" that load.
If there is not too much load with everything on the same server... Well, go with it, if it makes the hosting of your application cheaper ;-)
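A load test with ab, as suggested above, might look like the following; the URL and the request counts are placeholders to adjust for your own setup:

```
# 1000 requests total, 50 concurrent, against a staging copy of the app
ab -n 1000 -c 50 http://staging.example.com/
```

Watch the "Requests per second" and the percentile latency table in ab's output while also checking memory and CPU on the box (e.g. with top) to see which service saturates first.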
