MySQL or SQLite to store who voted - PHP

I want to create a rating system (Ajax) with PHP for some images. Only registered users will be able to vote, so I am going to store their IDs.
My question is: should I create a table in MySQL where I store the user id and photo id for all the images/ratings, or should I write a script which creates a separate SQLite file/database for EACH image, storing the IDs of the users who voted?
I am going to use this table only to check whether the user has voted for this image or not. Total votes and score will be stored in another MySQL table.
Which would be faster?
MySQL (one table containing the IDs of the users who voted, for ALL images)
image_id | user_id
---------------------
114 | 12
114 | 24
114 | 53
114 | 1
or
One SQLite file for each image rating
image_114.sqlite
user_id
-------
12
24
53
1

My recommendation is to use the structure from your first example, but without a hard dependency on MySQL, so SQLite could still be used. Here is why:
I consider your second example an abuse of the database. Assume you want to show a page containing the highest-rated images: how would you do that with the second approach?
The first approach seems quite fine to me, but since the structure is so trivial, why include a dependency on MySQL? Being able to run on "just about any" SQL database makes the app a lot more portable.
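To make that concrete, here is a minimal sketch of the single-table approach queried through PDO, so it runs unchanged on MySQL or SQLite (only the DSN differs); the image_votes table name and the connection details are assumptions for illustration, not part of the question:
<?php
// Assumes a table image_votes(image_id, user_id) with a unique key on (image_id, user_id).
// The DSN below targets MySQL; an SQLite DSN such as 'sqlite:/path/votes.sqlite' works too.
$pdo = new PDO('mysql:host=localhost;dbname=ratings', 'user', 'pass');

function hasVoted(PDO $pdo, int $imageId, int $userId): bool {
    $stmt = $pdo->prepare('SELECT 1 FROM image_votes WHERE image_id = ? AND user_id = ?');
    $stmt->execute([$imageId, $userId]);
    return (bool) $stmt->fetchColumn();
}

function recordVote(PDO $pdo, int $imageId, int $userId): void {
    $stmt = $pdo->prepare('INSERT INTO image_votes (image_id, user_id) VALUES (?, ?)');
    $stmt->execute([$imageId, $userId]);
}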

Related

database design to speed up application

I have 2 tables.
The first table stores URLs:
| link_id | link_url |   <== schema for url_table ::: contains 2 million+ rows
and the second table stores user bookmarks:
| user_id | link_id | is_bookmarked |   <== schema for user_table ::: over 3.5 million rows
is_bookmarked stores 1 or 0, depending on whether the user has bookmarked the link or not.
Here is the problem:
When a new link is added, these are the steps followed:
1) Check if the URL already exists in url_table, which means going through millions of rows
2) If it does not exist, add a new row to url_table and user_table
The database (MySQL) is simply taking too much time due to the enormous row set.
Also, it's a very simple PHP + MySQL app, with no search-assisted indexing programs whatsoever.
Any suggestions to speed this up?
Why not remove the column user_bookmarks.is_bookmarked and use the mere existence of a row with user_id and link_id as the indicator that the link was bookmarked?
A new link has no entries in the user_bookmarks table, because nobody has bookmarked it yet. When a user bookmarks a link, you add an entry. When the user removes the bookmark, you remove the row.
To check whether a user bookmarked a link, simply SELECT COUNT(*) FROM user_bookmarks WHERE user_id=? AND link_id=?. When the count is 1, it is bookmarked; when it is 0, it isn't.
The insert query for adding a new entry to the URL table could also be accelerated with an appropriate index.
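A minimal sketch of that check with a prepared statement, assuming the mysqli extension and a composite primary key on (user_id, link_id), which also serves as the index for fast lookups; connection details are illustrative:
<?php
// Assumed schema (no is_bookmarked column; the row itself is the flag):
//   CREATE TABLE user_bookmarks (
//       user_id INT UNSIGNED NOT NULL,
//       link_id INT UNSIGNED NOT NULL,
//       PRIMARY KEY (user_id, link_id)
//   );
$mysqli = new mysqli('localhost', 'user', 'pass', 'bookmarks_db');

function isBookmarked(mysqli $db, int $userId, int $linkId): bool {
    $stmt = $db->prepare('SELECT COUNT(*) FROM user_bookmarks WHERE user_id = ? AND link_id = ?');
    $stmt->bind_param('ii', $userId, $linkId);
    $stmt->execute();
    $stmt->bind_result($count);
    $stmt->fetch();
    $stmt->close();
    return $count > 0;
}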
If you told us what your current schema was (i.e. the CREATE TABLE statements, including indexes) rather than just your column names, we might be able to make practical suggestions on how to improve it.
There's certainly scope for improving the method of adding rows:
Assuming that link_url can be larger than the 767-byte index key limit for an InnoDB table (you didn't say which engine you are using), change the id column to contain an MD5 hash of link_url, with a unique index on it. Then, when you want to add a record, go ahead and try to insert it using INSERT IGNORE ....
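A sketch of how that could look, assuming the table/column names from the question and the mysqli extension; the exact DDL and credentials are illustrative:
<?php
// Sketch of the INSERT IGNORE approach described above: link_id holds the MD5
// hash of link_url and carries the unique (primary key) index, e.g.
//   CREATE TABLE url_table (
//       link_id  CHAR(32) NOT NULL PRIMARY KEY,   -- md5(link_url)
//       link_url TEXT NOT NULL
//   ) ENGINE=InnoDB;
$mysqli = new mysqli('localhost', 'user', 'pass', 'bookmarks_db');

$linkUrl = 'https://example.com/some/long/path';
$hash = md5($linkUrl);

$stmt = $mysqli->prepare('INSERT IGNORE INTO url_table (link_id, link_url) VALUES (?, ?)');
$stmt->bind_param('ss', $hash, $linkUrl);
$stmt->execute();

// affected_rows is 1 when the URL was new and 0 when it already existed,
// so no separate "does it exist" scan over millions of rows is needed.
$isNew = $stmt->affected_rows === 1;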

Copying records across multiple databases using PHP

I want to do something which may sound weird. I have a database for my main application which holds a few HTML templates created using my application. These templates are stored in a traditional RDBMS style: a table for template details and another for the page details of each template.
I have a similar application, for a different purpose, on another domain. It has a different database with the same structure as the main app. I want to move the templates from one database to the other, with all columns intact. I cannot simply export/import, as both have independent content of their own, i.e. they are the same in structure but differ in content. The 1st is the template table and the 2nd is the page table:
+----+--------------+
| id | templatename |
+----+--------------+
| 1  | File A       |
| 2  | File B       |
| 3  | File C       |
| 4  | File 123     |
| .. | ........     |
+----+--------------+

+----+-----------+-------------+
| id | page_name | template_id |   (template_id is a foreign key to the table above)
+----+-----------+-------------+
| 1  | index     | 1           |
| 2  | about     | 1           |
| 3  | contact   | 2           |
| .. | ......... | ........... |
+----+-----------+-------------+
I want to select records from the 1st database and insert them into the other. Both are on different domains.
I thought of writing a PHP script which uses two DB connections, one to select and the other to insert into the other DB, but I want to know whether I can achieve this in any more efficient way, using the command line or an export feature.
EDIT: for better understanding
I have two databases, A and B, on different servers. Both have two tables, say tbl_site and tbl_pages. Both are independently updated on their own domains via the application interface. I have a few templates created in database A, stored in tbl_site and tbl_pages as described above. I want those template records to be moved to database B.
You can do this in phpMyAdmin (and other query tools, but you mention PHP so I assume phpMyAdmin is available to you).
On the first database run a query to select the records that you want to copy to the second server. In the "Query results operations" section of the results screen, choose "Export" and select "SQL" as the format.
This will produce a text file containing SQL INSERT statements with the records from the first database.
Then connect to the second database and run the INSERT statements from the generated file.
As others mentioned, you can use phpMyAdmin, but if your second database's table fields are different, you can write a small PHP script to do it for you. Follow these steps.
Note: consider two databases A and B; you want to move some data from A to B, and both are on different servers.
1) First, allow remote access on database A's server for database A. Also get a host, username and password for database A.
2) Now, using the mysqli extension, connect to that database. Since the host is database A's remote server, use that, not localhost. On most servers, the host is the IP of the remote server.
3) Query the database table and get your results. After you get the results, close the database connection.
4) Connect to database B. Note that in this case database B's host may be localhost. Check your server settings for that.
5) Process the data you got from database A and insert it into database B's table(s).
I use this same method to import data from different systems (Drupal to PrestaShop, Joomla to a customized system), and it works fine.
I hope this helps.
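A rough sketch of steps 2 to 5, assuming the mysqli extension and the tbl_site table from the question; hosts, credentials, and column names are illustrative:
<?php
// Step 2: connect to both databases. Database A is reached via its remote host.
$dbA = new mysqli('203.0.113.10', 'remote_user', 'pass', 'database_a'); // remote server
$dbB = new mysqli('localhost',    'local_user',  'pass', 'database_b');

// Step 3: read the templates from database A, then close that connection.
$templates = $dbA->query('SELECT id, templatename FROM tbl_site')->fetch_all(MYSQLI_ASSOC);
$dbA->close();

// Steps 4-5: insert the rows into database B.
$stmt = $dbB->prepare('INSERT INTO tbl_site (templatename) VALUES (?)');
foreach ($templates as $row) {
    $stmt->bind_param('s', $row['templatename']);
    $stmt->execute();
}
$dbB->close();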
Export just the data of DB A (to a .sql file), or use a PHP script, which can then be automated if you need to do it again.
Result:
INSERT INTO table_A VALUES (1, 'File A');
....
INSERT INTO table_B VALUES (1, 'index', 1);
....
Be careful when importing the data: if the same IDs already exist you will get errors (keep this in mind). Make whatever modifications to the script are needed to solve these problems (remember that if you change an id in table_A you will have to change the corresponding foreign key in table_B). Again, this is a process you might be forced to automate.
Run the INSERT scripts on DB B.
As my question was a bit different, I preferred to answer it myself. The above answers are relevant in other scenarios, so I won't say they are wrong.
I had to run a script to make the inserts happen based on new IDs in the target database.
To make it a bit easier and avoid cross-domain requests to the database, I took a dump of the first database and restored it on the target server.
Then I wrote a script to select records from one database and insert them into the other, i.e. the target, so the IDs were taken care of automatically. The only problem (not really a problem) was that I had to run the script for each record independently.
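For reference, a sketch of that per-template script, assuming the tbl_site/tbl_pages structure from the question and that both databases now sit on the same server after the dump/restore; names and credentials are illustrative:
<?php
$src = new mysqli('localhost', 'user', 'pass', 'database_a');
$dst = new mysqli('localhost', 'user', 'pass', 'database_b');

$templateId = 1; // the template to copy, chosen per run

// Copy the template row without its id so the target assigns a fresh one.
$tpl = $src->query("SELECT templatename FROM tbl_site WHERE id = $templateId")->fetch_assoc();
$stmt = $dst->prepare('INSERT INTO tbl_site (templatename) VALUES (?)');
$stmt->bind_param('s', $tpl['templatename']);
$stmt->execute();
$newTemplateId = $dst->insert_id; // new foreign key for the pages

// Copy the pages, pointing them at the new template id.
$pages = $src->query("SELECT page_name FROM tbl_pages WHERE template_id = $templateId");
$stmt = $dst->prepare('INSERT INTO tbl_pages (page_name, template_id) VALUES (?, ?)');
while ($page = $pages->fetch_assoc()) {
    $stmt->bind_param('si', $page['page_name'], $newTemplateId);
    $stmt->execute();
}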

Can I use MySQL temporary tables to store search results?

I have a search page written in PHP, and it needs to search in the MySQL database, and the result need to be sortable. This search page will be accessed by many users (>1000 at any time).
However, it is not feasible to sort the search result in MySQL, as it would be very slow.
I'm thinking of storing each search result in its own table (not a MySQL temporary table), with the table name stored in another table for reference, like this:
| id | table_name | timeout |
-----------------------------
| 1 | result_1 | 10000 |
| 2 | result_2 | 10000 |
Then I can use the temporary tables to sort any search results whenever needed without the need to reconstruct (with some modification) the query.
Each table will be dropped, according to the specified timeout.
Assuming I cannot modify the structure of the existing tables used in the query, would this be a good solution, or are there better ways? Please advise.
Thanks
There's no need to go to the trouble of storing the results in a persistent database when you just want to cache search results in memory. Do you need indexed access to relational data? If the answer is no, don't store it in a MySQL database.
I know that phpBB (an open source web forum which supports MySQL backends) uses a key-value store to back its search results. If the forum is configured to give you a link to the specific results page (with the search id hash in the URL's query string), that link will be valid for a while but will eventually be flushed out of the cache, just like you want. It may be overkill to implement a full database abstraction layer if you're set on MySQL, though. Anyway:
http://wiki.phpbb.com/Cache
You should just use memcached or something to store the results data, and you can easily retrieve the data and sort it in PHP. Also there are some PHP-specific cache frameworks that minimize the cost of loading and offloading data from the interpreter:
https://en.wikipedia.org/wiki/List_of_PHP_accelerators
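A minimal sketch of the memcached approach, assuming the PHP Memcached extension; the key scheme, the 10-minute TTL, and the example products query are all illustrative:
<?php
// Cache the (unsorted) result set under a key derived from the search terms,
// then sort in PHP on each request.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$searchTerm = 'example';
$cacheKey = 'search:' . md5($searchTerm);

$results = $cache->get($cacheKey);
if ($results === false) {
    // Cache miss: run the expensive MySQL query once, then keep it for 10 minutes.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $stmt = $pdo->prepare('SELECT id, title, price FROM products WHERE title LIKE ?');
    $stmt->execute(['%' . $searchTerm . '%']);
    $results = $stmt->fetchAll(PDO::FETCH_ASSOC);
    $cache->set($cacheKey, $results, 600);
}

// Sort the cached rows in PHP however the user asked, e.g. by price.
usort($results, fn($a, $b) => $a['price'] <=> $b['price']);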

PHP Social Networking Friends Database Table containing serialized data

I want to develop a social networking site, where users can make friends with other users, and I have the following example table:
Table Name : Friends
id | friends
id will contain the id of the user, and friends will contain the ids of the user's friends in one row. The id column will be unique and the primary key.
My Question
I would like to know if I can store the list of friends as a serialized array, as that would limit the friend connections to only one row per user, as opposed to the other method described here, which is to have a friends table and insert a row for each user/friend pair.
During retrieval, I would unserialize the row and put it in an array.
You can do that, but you'll then have to keep it updated yourself, and there is no way to join on that information or search it accurately within a MySQL query.
With an app like this you WILL need that data to be available.
If you're not comfortable with the SQL required to join the tables in the proper way just ask for help with the point of confusion / frustration =)
It can be done, but there is no advantage to it. It will be difficult to use aggregate functions; for example, summing or finding the newest friend will be almost impossible. Joins will be impossible without unserializing in code and making new queries. Also, you need to change the structure a bit:
Table Friends
ID | User_ID | Friend_ID
------------------------
1  | 4       | 5
2  | 4       | 6
ID would be an auto-increment primary key, User_ID the id of the user, and Friend_ID the id of the friend.
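For illustration, fetching a user's friends then becomes a straightforward join; the users table and its columns are assumptions here, not part of the question:
<?php
// Fetch the friends of user 4 by joining Friends against an assumed users table.
$pdo = new PDO('mysql:host=localhost;dbname=social', 'user', 'pass');
$stmt = $pdo->prepare(
    'SELECT u.id, u.username
       FROM Friends f
       JOIN users u ON u.id = f.Friend_ID
      WHERE f.User_ID = ?'
);
$stmt->execute([4]);
$friends = $stmt->fetchAll(PDO::FETCH_ASSOC);

// Counting friends is a one-liner, which the serialized approach cannot offer:
$count = $pdo->query('SELECT COUNT(*) FROM Friends WHERE User_ID = 4')->fetchColumn();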
You shouldn't concatenate data (foreign IDs) into a single field in a relational database. You won't be able to join or select on that data. Instead use:
friends
userId, friendId
This would totally break the idea of a relational database.
You could also have a single table, person, storing the id of the person in the first position and the serialized friends after that.
The point is that you cannot make simple queries on such "database structures", like counting the number of friends, let alone finding common friends or doing other simple operations.
Anyway, I would recommend you take a look at some graph databases and consider using one for your social graph.

Implementing order in a PHP/MySQL CMS & dealing with concurrency

I have the following tables:
=======================           =======================
| galleries           |           | images              |
|---------------------|           |---------------------|
| PK | gallery_id     |<--\       | PK | image_id       |
|    | name           |    \      |    | title          |
|    | description    |     \     |    | description    |
|    | max_images     |      \    |    | filename       |
=======================       \-->| FK | gallery_id     |
                                   =======================
I need to implement a way for the images that are associated with a gallery to be sorted into a specific order. It is my understanding that relational databases are not designed for hierarchical ordering.
I also wish to prepare for the possibility of concurrency, even though it is highly unlikely to be an issue in my current project, as it is a single-user app. (So, the priority here is dealing with allowing the user to rearrange the order).
I am not sure of the best way to go about this, as I have never implemented ordering in a database and am new to concurrency. Because of this, I have read about locking MySQL tables, but am not sure whether this is a situation where I should use it.
Here are my two ideas:
Add a column named order_num to the images table. Lock the table and allow the client to rearrange the order of the images, then update the table and unlock it.
Add a column named order_num to the images table (just as idea 1 above). Allow the client to update one image's place at a time without locking.
Thanks!
Here's my thought: you don't want to put too many man-hours into a problem that isn't likely to happen. Therefore, take a simple solution that's not going to cause a lot of side effects, and fix it later if it's a problem.
In a web-based world, you don't want to lock a table for a user to do edits and then wait until they're done to unlock the table. User 1 in this scenario may never come back, they may lose their session, or their browser could crash, etc. That means you have to do a lot of work to figure out when to unlock the table, plus code to let user 2 know that the table's locked, and they can't do anything with it.
I'd suggest this design instead: let them both go into edit mode, probably in their browser, with some JavaScript. They can drag images around until they're happy with the order, then submit the order in full. You update your order_num field in a single transaction to the database.
In this scenario the worst thing that happens is that user 1 and user 2 are editing at the same time, and whoever saves last is the one whose order is preserved. Maybe they update at the exact same time, but the database will handle that, as it's going to queue up the transactions.
The drawback is that whoever got their order overwritten has to do it again. Annoying, but there's no loss, and the code to implement this is much simpler than the code to handle locking.
I hate to sidestep your question, but that's my thoughts about it.
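A minimal sketch of that full-order submit, assuming PDO, an order_num column on the images table, and that the client posts the image IDs in their new order; the request format and connection details are illustrative:
<?php
// Persist the whole submitted order in one transaction,
// e.g. order_ids[]=7&order_ids[]=3&order_ids[]=9 posted from the drag-and-drop UI.
$pdo = new PDO('mysql:host=localhost;dbname=cms', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$orderedIds = array_map('intval', $_POST['order_ids']);

$pdo->beginTransaction();
try {
    $stmt = $pdo->prepare('UPDATE images SET order_num = ? WHERE image_id = ?');
    foreach ($orderedIds as $position => $imageId) {
        $stmt->execute([$position + 1, $imageId]);
    }
    $pdo->commit(); // last writer wins; no table lock needed
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}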
If you don't want "per-user sorting", the order_num column seems the right way to go.
If you choose InnoDB as your storage engine, you can use transactions and won't have to lock the table.
Relational database and hierarchy:
I use id (auto increment) and parent columns to achieve hierarchy. A parent of zero is always the root element. You could order by id, parent.
Concurrency:
This is an easy way to deal with concurrency: use a version column. If the version has changed since user 1 started editing, block the save and offer to reload and re-edit. Increment the version after each successful edit.
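A sketch of that optimistic-locking idea, assuming a version column on the galleries table; doing the version check and the increment in the same UPDATE avoids any gap between them. Column names and values are illustrative:
<?php
// The client received $versionSeen when it loaded the gallery for editing.
// The UPDATE only matches if nobody else has saved in the meantime.
$pdo = new PDO('mysql:host=localhost;dbname=cms', 'user', 'pass');

$galleryId   = 5;   // gallery being edited
$versionSeen = 12;  // version the client loaded before editing

$stmt = $pdo->prepare(
    'UPDATE galleries
        SET name = ?, version = version + 1
      WHERE gallery_id = ? AND version = ?'
);
$stmt->execute(['Holiday photos', $galleryId, $versionSeen]);

if ($stmt->rowCount() === 0) {
    // Someone else saved first: block this save and offer to reload and re-edit.
    echo 'This gallery was changed by another edit. Please reload and try again.';
}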
