I'm attempting to write search functionality for a website, and I've decided upon an approach of using MySQL temporary tables to handle the data input, via the queries below (built as PHP strings):
"CREATE TEMPORARY TABLE `patternmatch`
 (`pattern` VARCHAR(" . strlen($queryLengthHere) . "))"
"INSERT INTO `patternmatch` VALUES " . $someValues
Where $someValues is a set of data with the layout ('some', 'search', 'query') - or basically what the user searched. I then search my main table images based on the data within table patternmatch like so:
SELECT images.* FROM images JOIN patternmatch ON (images.name LIKE patternmatch.pattern)
I then apply a heuristic or scoring system based on how well each result matched the input and display the results by that heuristic etc.
What I'm wondering is how much overhead does creating a temporary table require? I understand that they only exist in session, and are dropped as soon as the session is ended, but if I have hundreds of thousands of searches per second, what sort of performance issues might I encounter? Is there any better way of implementing a search functionality?
What you stated is correct: the temporary table will only be visible to the current user/connection. Still, there is some overhead and a few other problems, such as:
For each of those thousands of searches you are going to create and fill that table (and drop it later) - not per user, but per search, because each search will most likely re-execute the script, and "per session" does not mean PHP session - it means database session (an open connection).
You will need the CREATE TEMPORARY TABLES privilege, which you might not have.
Also, that table really should use the MEMORY engine, which eats more RAM than it looks like, because even with VARCHAR columns, MEMORY tables use fixed-length row storage.
If your heuristics later need to refer to that table twice (like SELECT xyz FROM patternmatch AS pm1, patternmatch AS pm2 ...) - this is not possible with temporary tables.
Next, it would be easier for you - and also for the database - to add the LIKE '%xyz%' directly to your images table's WHERE clause. It will do the same without the overhead of creating a temporary table and joining it.
In any case - no matter which way you go - that WHERE will be horribly slow. Even if you add an index on images.name you most likely will need LIKE '%xyz%' instead of LIKE 'xyz%', so that index will not get used.
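As a rough sketch of that direct approach, using the three example terms from the question, the WHERE clause could look like this (no temp table involved):

SELECT images.*
FROM images
WHERE images.name LIKE '%some%'
   OR images.name LIKE '%search%'
   OR images.name LIKE '%query%';

In real code you would of course build the OR list from the user's (escaped) search terms.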
I'm asking whether a session-specific temporary table to handle the search input by the user (created on a search, dropped on the end of a session) is an appropriate way of handling a search functionality.
No. :)
Alternative options
MySQL has a built-in fulltext search (since 5.6 also for InnoDB) that can even give you that scoring: I highly recommend giving it a read and a try. You can be sure that the database knows better than you how to do that search efficiently.
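For illustration, a minimal sketch could look like the following, assuming you add a FULLTEXT index on images.name first (the index name is just an example):

ALTER TABLE images ADD FULLTEXT INDEX ft_name (name);

SELECT images.*,
       MATCH(name) AGAINST ('some search query') AS score
FROM images
WHERE MATCH(name) AGAINST ('some search query')
ORDER BY score DESC;

MATCH ... AGAINST in the SELECT list returns the relevance score, so the scoring you wanted comes for free.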
If you are going to use MyISAM instead of InnoDB, be aware of the often overlooked 50% threshold: in natural language mode, a search term that matches 50% or more of the rows is treated as a stopword and returns no results.
Something else you might want to look at is Solr (a nice introduction to the topic is the beginning of http://en.wikipedia.org/wiki/Apache_Solr ). We are using it in our company and it does a great job, but it requires quite some learning.
Summary
The solution to your current problem itself (the search) is to use the FULLTEXT capabilities.
If I have hundreds of thousands of searches per second, what sort of performance issues might I encounter? Is there any better way of implementing a search functionality?
To give you a number: 10,000 calls per second is already not "trivial" - with hundreds of thousands of searches per second, the sort of performance issues you will encounter are everywhere in your set-up. You are going to need several servers, load balancing and tons of other amazing tech crap. And one of those pieces will, for example, be Solr ;)
Creating temporary tables on disk is relatively expensive. In your scenario it sounds like it'll be slower than it's worth.
It's usually only worthwhile to create temporary tables in memory. But you need to know you have enough memory available at all times. If you plan to support so many searches per second this is not a good solution.
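For reference, an in-memory temporary table for the pattern list would be declared roughly like this (a sketch; the column size is an assumption):

CREATE TEMPORARY TABLE patternmatch (
    pattern VARCHAR(64)    -- MEMORY tables store this as a fixed-length row
) ENGINE=MEMORY;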
MySQL has full-text searching built in. It's good for small systems. This would likely perform far better than your temp table and JOIN. But if you want to support thousands of searches per second I would not recommend it: it could consume too much of your overall database performance. Plus, before MySQL 5.6 you're forced to use MyISAM for storage, which might have its own issues in your scenario.
For so many searches you'll want to offload the work to another system. Plenty of searching systems with scoring already exist. Take a look at ElasticSearch, Solr/Lucene, Redis, etc.
From the code you give, I really don't think tmp tables are needed, nor is FULLTEXT searching. But ... about tmp table performance:
The creation/cleanup of the tmp table is not written to transaction logs, so the I/O involved will be relatively quick for the OS. If the temporary tables are small and short-lived, and you have lots of buffers available for the OS, the disk realistically won't even be touched. If you think it will be anyway, get an SSD drive, and get more RAM.
But if you are realistic that you are looking at hundreds of thousands of searches per second then you have a big engineering project on hand. Why not just do:
select images.* from images where name in ('some', 'search', 'query')
?
I see a lot of statements like: "Cassandra is very fast on writes", "Cassandra's reads are really slower than its writes, but much faster than MySQL's".
On my Windows 7 system:
I installed MySQL with the default configuration.
I installed PHP 5 with the default configuration.
I installed Cassandra with the default configuration.
A simple write test on MySQL: "INSERT INTO wp_test (id,title) VALUES ('id01','test')" gives me a result of 0.0002 s.
For 1000 inserts: 0.1106 s.
The same simple write test on Cassandra: $column_family->insert('id01', array('title' => 'test')) gives me a result of 0.005 s.
For 1000 inserts: 1.047 s.
For read tests I also found that Cassandra is much slower than MySQL.
So the question: does it sound correct that one write operation on Cassandra takes 5 ms? Or is something wrong, and it should be more like 0.5 ms?
When people say "Cassandra is faster than MySQL", they mean when you are dealing with terabytes of data and many simultaneous users. Cassandra (and many distributed NoSQL databases) is optimized for hundreds of simultaneous readers and writers on many nodes, as opposed to MySQL (and other relational DBs), which are optimized to be really fast on a single node but tend to fall to pieces when you try to scale them across multiple nodes. There is a generalization of this trade-off, by the way: the absolute fastest disk I/O is plain old UNIX flat files, and many latency-sensitive financial applications use them for that reason.
If you are building the next Facebook, you want something like Cassandra because a single MySQL box is never going to stand up to the punishment of thousands of simultaneous reads and writes, whereas with Cassandra you can scale out to hundreds of data nodes and handle that load easily. See scaling up vs. scaling out.
Another use case is when you need to apply a lot of batch processing power to terabytes or petabytes of data. Cassandra or HBase are great because they are integrated with MapReduce, allowing you to run your processing on the data nodes. With MySQL, you'd need to extract the data and spray it out across a grid of processing nodes, which would consume a lot of network bandwidth and entail a lot of unneeded complication.
Cassandra benefits greatly from parallelisation and batching. Try doing 1 million inserts on each of 100 threads (each with its own connection and in batches of 100) and see which one is faster.
Finally, Cassandra's insert performance should be relatively stable (maintaining high throughput for a very long time). With MySQL, you will find that it tails off rather dramatically once the B-trees used for the indexes grow too large to fit in memory.
It's likely that the maturity of the MySQL drivers, especially the improved MySQL drivers in PHP 5.3, is having some impact on the tests. It's also entirely possible that the simplicity of the data in your query is impacting the results - maybe with 100-value inserts, Cassandra becomes faster.
Try the same test from the command line and see what the timestamps are, then try with varying numbers of values. You can't do a single test and base your decision on that.
Many user space factors can impact write performance. Such as:
Dozens of settings in each database server's configuration.
The table structure and settings.
The connection settings.
The query settings.
Are you swallowing warnings or exceptions? On the face of it, the MySQL sample would be expected to produce a duplicate-key error, since the same id is inserted 1000 times. It could be failing while doing nothing at all. What Cassandra would do in the same case isn't something I'm familiar with.
My limited experience with Cassandra tells me one thing about inserts: while the performance of everything else degrades as data grows, inserts appear to maintain the same speed. How fast it is compared to MySQL, however, isn't something I've tested.
It might not be so much that inserts are fast, but rather that Cassandra tries never to be slow. If you want a more meaningful test you need to incorporate concurrency and more variations on the scenario, such as large data sets, various batch sizes, etc. More complex tests might measure latency until data is available after an insert, and read speed over time.
It would not surprise me if Cassandra's first port of call for inserting data is to put it on a queue or to simply append. This is configurable if you look at consistency level. MySQL similarly allows you to balance performance and reliability/availability though each will have variations on what they allow and don't allow.
Outside of that unless you get into the internals it may be hard to tell why one performs better than the other.
I did some benchmarks of a use case I had for Cassandra a while ago. For the benchmark it would insert tens of thousands of rows first. I had to make the script sleep for a few seconds because otherwise queries run after the fact would not see the data and the results would be inconsistent between implementations I was testing.
If you really want fast inserts, append to a file on ramdisk.
MySQL has memory-based storage engines, which means it can keep the data in RAM.
As far as I know, there are two MySQL storage engines that use memory:
One is the MEMORY engine itself.
The not-so-cool feature of this storage engine is that it only creates in-memory tables, which means the data is lost if the server is restarted.
The other one is the Cluster storage engine.
This doesn't have the drawback of the previous engine: it uses memory, but it also keeps a file-based record of the data.
Now the question is: if your database is already using RAM to store and process data, do you need to add another caching layer like Memcached to boost your product's performance?
How fast is a memory-engined database compared to Memcached?
Does Memcached add any features to your product that a memory-engined database doesn't?
Plus, a memory-engined database gives you more features, such as being able to run queries, compared to Memcached, which only lets you get raw data by key - so Memcached is kind of like a database engine that only supports the SELECT command.
Am I missing something?
It depends how you use memcached. If you use it to cache a rendered HTML page that took 30 SQL queries to build, then it will give you a performance boost even over an in-memory database.
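As a minimal sketch of that idea (assuming a local memcached instance and a hypothetical render_page() that runs those 30 queries):

$memcached = new Memcached();
$memcached->addServer('127.0.0.1', 11211);

$cacheKey = 'page:' . md5($_SERVER['REQUEST_URI']);
$html = $memcached->get($cacheKey);

if ($html === false) {                      // cache miss: build the page once
    $html = render_page();                  // hypothetical: the expensive 30-query part
    $memcached->set($cacheKey, $html, 300); // keep the rendered page for 5 minutes
}
echo $html;

Every request within those 5 minutes then costs a single cache lookup instead of 30 SQL queries, no matter how fast the database itself is.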
A (relational) database and caching service are complementary. As pointed out, they have different design goals and use-cases. (And yet I find the core advantage of a database missing in the post.)
Memcached (and other caches) offer some benefits that cannot be realized under an ACID database model. This is a trade-off, but such caches are designed for maximum distribution and minimum latency. Memcached is not a database: it is a distributed key-value store with some eviction policies. Because it is merely a key-value store it can "skip" many of the steps in querying database data - at the expense of only directly supporting 1-1 (key-to-value) operations. No joins, no relationships, a single result, etc.
Remember, a cache is just that: a cache. It is not a reliable information store nor does it offer the data-integrity/consistency found in a (relational) database. Even cache systems which can "persist" data do not necessarily have ACID guarantees (heck, even MyISAM isn't fully ACID!). Some cache systems offer much stronger synchronization/consistency guarantees; memcached is not such a system.
The bottom line is, because of memcached's simple design and model, it will win on latency and distribution within the realm it operates in. How much? Well, that depends...
...first, pick approach(es) with the required features and guarantees and then benchmark the approaches to determine which one(s) are suitable (or "best suited") for the task. (There might not even be a need to use a cache or "memory database" at all or the better approach might be to use a "No SQL" design.)
Happy coding.
Memcached can store items of up to 1 MB each (by default). What it does is take load off the DB so that you don't even connect to the DB to ask it for data. The majority of websites display only a small amount of data to the user (in terms of textual data, not the files themselves).
So to answer: yes, it's a good idea to have Memcached too, since it can save you from connecting to the DB at all, which removes some overhead right at the start.
On the other hand, there's a plethora of engines available for MySQL. Personally, I wouldn't use the MEMORY engine, for many reasons - one of them being the loss of data. InnoDB, the default MySQL engine as of recent releases, already caches the working data set in memory (controlled by the innodb_buffer_pool_size variable) and is incredibly fast if the data set fits in memory.
There's also the TokuDB engine, which surpasses InnoDB in terms of scaling - both are better choices than the MEMORY engine. However, it's always a good thing to cache data that's frequently accessed and rarely changed.
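To get a feel for whether your working set actually fits, you can check the current buffer pool size (a quick sketch; the value itself is set via innodb_buffer_pool_size in the server configuration, and the table name below is just a placeholder):

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';   -- current pool size in bytes
SHOW TABLE STATUS LIKE 'your_table';             -- rough data + index size of one table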
I've seen several database cache engines, all of them pretty dumb (i.e. "keep this query cached for X minutes") and requiring that you manually delete the whole cache repository after an INSERT / UPDATE / DELETE query has been executed.
About 2 or 3 years ago I developed an alternative DB cache system for a project I was working on, the idea was basically to use regular expressions to find the table(s) involved in a particular SQL query:
$query_patterns = array
(
'INSERT' => '/INTO\s+(\w+)\s+/i',
'SELECT' => '/FROM\s+((?:[\w]|,\s*)+)(?:\s+(?:[LEFT|RIGHT|OUTER|INNER|NATURAL|CROSS]\s*)*JOIN\s+((?:[\w]|,\s*)+)\s*)*/i',
'UPDATE' => '/UPDATE\s+(\w+)\s+SET/i',
'DELETE' => '/FROM\s+((?:[\w]|,\s*)+)/i',
'REPLACE' => '/INTO\s+(\w+)\s+/i',
'TRUNCATE' => '/TRUNCATE\s+(\w+)/i',
'LOAD' => '/INTO\s+TABLE\s+(\w+)/i',
);
I know that these regexes probably have some flaws (my regex skills were pretty green back then) and obviously don't match nested queries, but since I never use those, that isn't a problem for me.
Anyway, after finding the involved tables I would alphabetically sort them and create a new folder in the cache repository with the following naming convention:
+table_a+table_b+table_c+table_...+
In case of a SELECT query, I would fetch the results from the database, serialize() them and store them in the appropriate cache folder, so for instance the results of the following query:
SELECT `table_a`.`title`, `table_b`.`description` FROM `table_a`, `table_b` WHERE `table_a`.`id` <= 10 ORDER BY `table_a`.`id` ASC;
Would be stored in:
/cache/+table_a+table_b+/079138e64d88039ab9cb2eab3b6bdb7b.md5
The MD5 hash is of the query itself. Upon a subsequent identical SELECT query, the results would be trivial to fetch.
In case of any other type of write query (INSERT, REPLACE, UPDATE, DELETE and so on) I would glob() all the folders that had +matched_table(s)+ in their name and delete all of their file contents. This way it wouldn't be necessary to delete the whole cache, just the cache used by the affected and related tables.
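A condensed sketch of the two paths described above (extract_tables() and run_query() are hypothetical helpers; extract_tables() would apply the $query_patterns regexes):

$tables = extract_tables($sql);                    // e.g. array('table_a', 'table_b')
sort($tables);
$dir  = '/cache/+' . implode('+', $tables) . '+/';
$file = $dir . md5($sql) . '.md5';

if (is_file($file)) {
    $results = unserialize(file_get_contents($file));    // cache hit
} else {
    $results = run_query($sql);                          // cache miss: hit the DB
    if (!is_dir($dir)) {
        mkdir($dir, 0775, true);
    }
    file_put_contents($file, serialize($results));
}

// On a write query touching e.g. table_a: wipe every cache folder mentioning it.
foreach (glob('/cache/*+table_a+*', GLOB_ONLYDIR) as $hit) {
    array_map('unlink', glob($hit . '/*.md5'));
}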
The system worked pretty well and the difference in performance was visible - although the project had many more read queries than write queries. Since then I started using transactions and FK CASCADE UPDATES / DELETES, and never had the time to perfect the system to make it work with these features.
I've used MySQL Query Cache in the past but I must say the performance doesn't even compare.
I'm wondering: am I the only one who sees beauty in this system? Are there any bottlenecks I may not be aware of? Why do popular frameworks like CodeIgniter and Kohana (I'm not aware of Zend Framework) have such rudimentary DB cache systems?
More importantly, do you see this as a feature worth pursuing? If yes, is there anything I could do / use to make it even faster (my main concerns are disk I/O and (de)serialization of query results)?
I appreciate all input, thanks.
I can see the beauty in this solution; however, I believe it only works for a very specific set of applications. Scenarios where it is not applicable include:
Databases which utilize cascading deletes/updates or any kind of triggers. E.g., your DELETE to table A may cause a DELETE from table B. The regex will never catch this.
Accessing the database from points which do not go through your cache invalidation scheme, e.g. crontab scripts etc. If you ever decide to implement replication across machines (introduce read-only slaves), it may also disturb the cache (because it does not go through cache invalidation, etc.).
Even if these scenarios are not realistic for your case it does still answer the question of why frameworks do not implement this kind of cache.
Regarding whether this is worth pursuing: it all depends on your application. Maybe you care to supply more information?
The solution, as you describe it, is at risk for concurrency issues. When you're receiving hundreds of queries per second, you're bound to hit a case where an UPDATE statement runs, but before you can clear your cache, a SELECT reads from it, and gets stale data. Additionally, you may run in to issues when several UPDATEs hit the same set of rows in a short time period.
In a broader sense, best practice with caching is to cache the largest objects possible. E.g., rather than having a bunch of "user"-related rows cached all over the place, it's better to just cache the "user" object itself.
Better yet, if you can cache whole pages (e.g., you show the same homepage to everyone; a profile page appears identical to almost everyone, etc.), one cache fetch for a whole, pre-rendered page will dramatically outperform dozens of cache fetches for row/query-level caches followed by re-rendering the page.
Long story short: profile. If you take the time to do some measurement, you'll likely find that caching large objects, or even pages, rather than small queries used to build those things, is a huge performance win.
While I do see the beauty in this - especially for environments where resources are limited and can not easily be extended, like on shared hosting - I personally would fear complications in the future: What if somebody, newly hired and unaware of the caching mechanism, starts using nested queries? What if some external service starts updating the table, with the cache not noticing?
For a specialized, defined project that urgently needs a speedup that cannot be helped by adding processor power or RAM, this looks like a great solution. As a general component, I find it too shaky, and would fear subtle problems in the long run that stem from people forgetting that there is a cache to be aware of.
I suspect that the regexes may not cover every case - certainly they don't seem to deal with the scenario of mixing schema-qualified table names and bare table names. E.g. consider
update stats.measures set amount=50 where id=1;
and
use stats;
update measures set amount=50 where id=1;
Then there's PL/SQL.
Then there's the fact that it depends on every client opting in to an advisory control mechanism i.e. it pre-supposes that all the database access is from machines implementing the caching control mechanism on a shared filesystem.
(as a small point - wouldn't it be simpler to just check the modification times on the data files to determine if the cached version of a query on a defined set of tables is still current, rather than trying to identify whether the cache control mechanism has spotted an update - it would certainly be a lot more robust)
Stepping back a bit, implementing this from scratch using a robust architecture would mean that all queries would have to be intercepted by the control mechanism. The control mechanism would probably need a more sophisticated query parser. It certainly requires a common storage substrate for all the instances of the control mechanism. It probably needs an understanding of the data dictionary - all things which are already implemented by the database itself.
You state that "I've used MySQL Query Cache in the past but I must say the performance doesn't even compare."
I find this rather odd. Certainly when dealing with large result sets from queries, my experience is that loading the data into the heap from a database is a lot faster than unserializing large arrays - although large result sets are rather atypical of web based applications.
When I've tried to speed up database access (after fixing everything else of course) then I've gone down the route of replicating and partitioning data across multiple DBMS instances.
C.
This is related to the problem of session splitting when working with multiple databases in a master-slave configuration. Basically, a similar set of regular expressions are used to determine which tables (or even which rows) are being read from or written to. The system keeps track of which tables were written to and when, and when a read to one of those tables comes up, it's routed to the master. If a query is reading from a table whose data needn't be up-to-the-second accurate, then it's routed to the slave. Generally, information only really needs to be current when it's something a user changed themselves (i.e., editing a user's profile).
They talk about this a good bit in the O'Reilly book High Performance MySQL. I used it quite a bit when developing a system for handling session splits back in the day.
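As a rough illustration of that routing idea (a sketch only; pick_connection() and the patterns here are hypothetical, not taken from the book):

function pick_connection($sql, array $recentlyWritten)
{
    // Writes always go to the master.
    if (preg_match('/^\s*(INSERT|UPDATE|DELETE|REPLACE)\b/i', $sql)) {
        return 'master';
    }
    // Reads of a table that was just written to also go to the master,
    // so the user sees their own change; everything else can hit a slave.
    if (preg_match('/\bFROM\s+`?(\w+)`?/i', $sql, $m) && isset($recentlyWritten[$m[1]])) {
        return 'master';
    }
    return 'slave';
}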
The improvement you describe is to avoid invalidating caches that are guaranteed to not have been affected by an update because they draw data from a different table.
That is of course nice, but I am not sure if it is fine-grained enough to make a real difference. You would still be invalidating lots of caches that did not really need to be invalidated (because the update was on the same table, but on different rows).
Also, even this "simple" scheme relies on being able to detect the relevant tables by looking at the SQL query string. This can be difficult to do in the general case, because of views, table aliases, and multiple catalogs.
It is very difficult to automatically (and efficiently) detect whether a cache needs to be invalidated. Because of that, you can either use a very simple scheme (such as invalidating on every update, or per table, as in your system, which does not work too well when there are many updates), or a very hand-crafted cache for the specific application with deep hooks into the query logic (probably difficult to write and hard to maintain), or accept that the cache can contain stale data and just refresh it periodically.
An SQL database is overkill if your storage needs are small. When I was young and dumb, I used a text file and flock()ed it when I needed to access it. This doesn't scale, but I still feel that non-database solutions have been completely ignored in Web 2.0.
Does anyone not use an SQL database for storage? What are the alternatives?
There are a lot of alternatives. But with SQLite, which gives you SQL power combined with the no-fuss simplicity of file-based storage, there is no need to look for those alternatives. SQLite is light enough to be used in cell phones and MP3 players, so I don't see how it could be considered overkill.
So unless your application needs something very specific, don't bother. Most alternatives are a lot harder to use and offer less performance.
SQLite was invented for this.
It's just a flat file that contains a complete SQL database. You can query, update, insert and delete, there's little to no installation overhead, and all you need is the driver (which comes standard with PHP).
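A minimal sketch of using it from PHP via PDO (the file name and table are just examples):

$db = new PDO('sqlite:' . __DIR__ . '/data.sqlite');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$db->exec('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)');

$stmt = $db->prepare('INSERT INTO notes (body) VALUES (?)');
$stmt->execute(array('hello world'));

foreach ($db->query('SELECT id, body FROM notes') as $row) {
    echo $row['id'] . ': ' . $row['body'] . "\n";
}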
SQLite is a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine.
Kind of weird that nobody mentioned this already?
CouchDB (http://couchdb.apache.org/index.html) is a non-SQL database and seems to be a popular project these days, as are Google's Bigtable and GT.M (http://sourceforge.net/projects/fis-gtm), which has been around forever.
Object databases abound as well: db4o (http://www.db4o.com/) and ZODB (http://www.zope.org/Products/StandaloneZODB), just to name a couple.
All of these are supposedly faster and simpler than traditional SQL databases for certain use cases, but none approach the simplicity of a flat file.
A distributed hash table like Google Bigtable or Hadoop is a simple and scalable non-SQL database and often suits websites far better than an SQL database. SQL is great for complex relational data, but most websites don't have this requirement. Most websites store and retrieve data in a few forms and don't need to run complex operations on the data.
Take a look at one of these solutions as they will provide all of the concurrent access that you need but don't subscribe to the traditional ideas of data normalisation. They can be thought of as pretty analogous to a bunch of named text files.
It probably depends how dynamic your web site is. I used wiki software once that used RCS to check text files in and out. I wouldn't recommend that solution for something that gets as many updates as Stack Overflow or Wikipedia. The thing about databases is that they scale well, and the database engine writers have figured out all the fiddly little details of simultaneous access, load balancing, replication, etc.
I would say that it doesn't depend on whether you store less or more information; it depends on how often you request the stored data. Database managers are superb at caching queries, so they are often the better choice performance-wise. However, if you don't need a dynamic web page and are just loading static data, maybe a text file is the better option. Which format the data is stored in (e.g. XML, JSON, key=value pairs) doesn't matter - it's the I/O operations that are performance-heavy.
When I'm developing web applications, I always use an RDBMS as the primary data holder. If the web application doesn't need to serve dynamic data on every request, I simply apply caching, storing the data in a cache file that gets served whenever no new data has been added to the primary data source (the RDBMS).
I wouldn't choose whether to use an SQL database based on how much data I wanted to store - I would choose based on what kind of data I wanted to store and how it is to be used.
Wikipedia defines a database as: "A database is a structured collection of records or data that is stored in a computer system." And I think your answer lies there: if you want to store records such as customer accounts, access rights and so on, then a DB such as MySQL or SQLite or whatever is not overkill. They give you a tried and trusted mechanism for managing those records.
If, on the other hand, your website stores and delivers unchanging file-based content such as PDFs, reports, mp3s and so on then simply storing them in a well-defined directory layout on a disk is more than enough. I would also include XML documents here: if you had for example a production department that created articles for a website in XML format there is no need to put them in a DB - store them on disk and use XSLT to deliver them.
Your choice of SQL or not will also depend on how the content you wish to store is to be retrieved. SQL is obviously good for retrieving many records based on search criteria whereas a directory tree, XML database, RDF database, etc are more likely to be used to retrieve single records.
Choice of storage mechanism is very important when trying to scale high-traffic site and stuffing everything into a SQL DB will quickly become a bottleneck.
It depends what you are storing. My blog uses Blosxom (written in Perl but a similar thing could be done for PHP) where each individual entry is a separate text file. The first line is plain text (the title) and the rest is unrestricted HTML. Following a few simple rules, these are rendered to form a simple but effective blogging framework.
It does have drawbacks but it also means that each post is a discrete file, which works well for updating on a local machine and then publishing to a remote web server. This is limited when it comes to efficient querying though, so certainly not a good choice if you want fine-grained control and web-based interaction with your data.
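As a rough PHP sketch of that flat-file layout (the paths and field layout are assumptions, not Blosxom's actual code): each post is a text file whose first line is the title and the rest is HTML.

foreach (glob(__DIR__ . '/posts/*.txt') as $file) {
    $lines = file($file, FILE_IGNORE_NEW_LINES);
    $title = array_shift($lines);            // first line: plain-text title
    $body  = implode("\n", $lines);          // remainder: unrestricted HTML

    echo '<h2>' . htmlspecialchars($title) . '</h2>' . "\n" . $body . "\n";
}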
Check CouchDB.
I have used LINQ to XML as a data source in a .NET project. It was a small solution, and used caching to mitigate performance concerns. I would do it again for the quick site that just needs to keep data in a common place without increasing server requirements.
Depends on what you're storing and how you need to access it. Generally, SQL provides great reporting and manual management ability. Almost everything needs some way to manage what's stored and report on it.
In Perl I use DBM or Storable for such tasks. A tied DBM hash updates automatically when the variable is updated.
One level down from SQL databases is an ISAM (Indexed Sequential Access Method) - basically tables and indexes but no SQL and no explicit relationships among tables. As long as the conceptual basis fits your design, it will scale nicely. I've used Codebase effectively for a long time.
If you want to work with SQL-database-type data, then consider FileMaker.
A simple answer is that you can use any data storage format, from a standards-defined one, to a database (which generally involves a protocol), to even a bespoke file format.
There are trade-offs for every choice you make in IT, and websites are certainly no different. In the early 2000s, file-based forum systems were popular because they allowed anyone with limited technical ability to edit pages and posts. Completely static sites swiftly become unmanageable and their content does not benefit from upgrades to the site's user interface; however, if coded correctly, such a site can simply be moved to a sub-directory or ripped into the new design. CMSs and dynamic systems bring their own set of problems, namely that there does not yet exist a widely adopted standard for data storage amongst them, and that they often rely on third-party plugins to provide features across design styles (despite their documentation advocating separation of function and form).
In 2016, it's pretty uncommon not to use a standard storage mechanism such as a *SQL RDBMS, although static site generators such as Jekyll (which powers a lot of GitHub Pages sites) and independent players such as October CMS still provide for static file-based storage.
My personal preference is to use an *SQL-enabled RDBMS; it gives me syntax that is standardised at least at the vendor level, and is familiar and powerful. But unlike a lot of people I don't think this is the only way, and in most cases I would advocate using a site generator to save the parts that don't have to be dynamic to a static store, as this is the cheapest way to live on the web.
TL;DR: it's up to you; SQL- and RDBMS-backed storage is popular.
Well, this is a bit of an open-ended question from the OP and there are two questions ... around SQL alternatives and non-SQL.
In general, in the "why is SQL good" category: it's a mature and robust standard that provides referential integrity. Java JDBC supports it fully, as do tools like TOAD, and there are many SQL implementations, such as the SQLite referenced earlier.
Now, "for a web-site" specifically is not particularly indicative of anything. Does a web-site need referential integrity? Maybe. If the business nature of the web-site is largely unstructured content, then one can really consider any kind of persistent storage, from so-called "NoSQL" databases like AWS DynamoDB to Mongo (not a fan, though).
For managing the complexities of SQL stores, one suggestion - rather than a list of every persistence store ever created - is AWS Aurora (part of the RDS service). It is MySQL-compatible and supports multi-region deployments, JDBC/ODBC-based driver frameworks work out of the box, and it effectively offers "zero administration".
I would check out XML if I were you. See the w3schools XML tutorial. Tons of possibilities without using an SQL database.