I'm using PHP and MySQL for a social network system.
I have a MySQL table named member_feed in which I save feeds for each member. My table structure is:
|member_feed_id | member_id | content_id | activity_id | active
The table has more than 120 million records, and every member has a set of records.
I run these queries on this table:
1 - select * from member_feed where member_id = $memberId
2 - update member_feed set active = 1 where member_id = $member_id
3 - insert into member_feed (member_id, content_id, activity_id, active) values (2, 3, 59, 1)
Each member gets at least 50 new feed entries per day through a beanstalkd system, and I have more than 60,000 members.
I'm currently working on migrating from MySQL to MongoDB and I'm new to MongoDB, so I need to convert this table to a collection in MongoDB.
Is this use case a good fit for MongoDB, and is it a good solution to increase my system's performance?
Well, in MongoDB joins between two collections are not available, so my question is: where will you store member, content and activity? I believe these should be embedded in member_feed.
Example:
member_feed Structure:
{ _id: 1, member: { _id: 2, name: "Anuj" }, activity: { details: "xyz" }, active: false }
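If it helps, here is a rough sketch of the three queries from the question run against such a collection, using the mongodb/mongodb PHP library; the database name (social) and the embedded member/activity shapes are only assumptions for illustration:

<?php
// Rough sketch with the mongodb/mongodb library; db name and document shape are assumptions.
require 'vendor/autoload.php';

$client     = new MongoDB\Client('mongodb://localhost:27017');
$collection = $client->social->member_feed;

// Index member_id, since every query below filters on it.
$collection->createIndex(['member_id' => 1]);

$memberId = 2;

// 1 - select * from member_feed where member_id = $memberId
$feed = $collection->find(['member_id' => $memberId]);

// 2 - update member_feed set active = 1 where member_id = $member_id
$collection->updateMany(['member_id' => $memberId], ['$set' => ['active' => true]]);

// 3 - insert into member_feed (member_id, content_id, activity_id, active) values (2, 3, 59, 1)
$collection->insertOne([
    'member_id'  => 2,
    'member'     => ['_id' => 2, 'name' => 'Anuj'],      // embedded member
    'content_id' => 3,
    'activity'   => ['_id' => 59, 'details' => 'xyz'],   // embedded activity
    'active'     => true,
]);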
Hope that helps!!!
Thanks
I want to create a rating system (Ajax) with PHP for some images. Only registered users will be able to vote, so I am going to store their id.
My question is: should I create a table in MySQL where I store the user id and photo id for all the images/ratings, or should I write a script which creates a separate SQLite file/database for EACH image storing the ids of the users who voted?
I am going to use this table only to check whether a user has voted for a given image or not. Total votes and score will be stored in another MySQL table.
Which would be faster?
MySQL (one table containing the ids of users who voted, for ALL the images)
image_id | user_id
---------------------
114 | 12
114 | 24
114 | 53
114 | 1
or
1 SQLite file foreach image rating
image_114.sqlite
user_id
-------
12
24
53
1
My recommendation is to use the structure from your first example, but without a hard dependency on MySQL, so SQLite could still be used. Here is why:
I consider your second example an abuse of a database. Assume you want to show a page containing the highest-rated images - how would you do that with the second approach?
The first approach seems quite fine to me, but since the structure is so trivial, why include a dependency on MySQL? Being able to run on "just about every" SQL database makes the app a lot more portable.
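For the first approach, a minimal sketch with PDO, assuming a table called image_votes (the name is made up) and a composite UNIQUE constraint so a user can only vote once per image; the same schema and queries work on MySQL or SQLite:

<?php
// Minimal sketch; the image_votes table name and the UNIQUE constraint are assumptions.
// For SQLite the DSN would simply be 'sqlite:/path/to/ratings.sqlite'.
$pdo = new PDO('mysql:host=localhost;dbname=ratings', 'user', 'pass');

$pdo->exec('CREATE TABLE IF NOT EXISTS image_votes (
    image_id INTEGER NOT NULL,
    user_id  INTEGER NOT NULL,
    CONSTRAINT uq_vote UNIQUE (image_id, user_id)
)');

// Has this user already voted for this image?
$stmt = $pdo->prepare('SELECT 1 FROM image_votes WHERE image_id = ? AND user_id = ?');
$stmt->execute([114, 12]);
$hasVoted = (bool) $stmt->fetchColumn();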
I'm developing a webshop with a MySQL database for a client.
This client already has an invoice management website with a MySQL database.
Now I want to write a PHP script, triggered by a cron job, to sync invoice, client and product records.
order record:
id | clientId | status | shipping | reduction
order_items records:
id | productId | price |amount | orderId
client record:
id | fname | name | email | ...
Note that only order records with status = 2 should be synchronised; after they have been synchronised, the status should change to 3.
Both databases use different tables for orders and invoices.
What is the best way to do this?
1) Select records
2) Loop over records
3) start transaction (optional)
4) Insert records in db2
5) Update records in db1
6) commit
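For what it's worth, a rough sketch of those six steps with PDO; the DSNs, credentials and the invoice table on the second database are assumptions, and each connection gets its own transaction since they cannot share one:

<?php
// Rough sketch; connection details and the invoice table/columns are assumptions.
$shop     = new PDO('mysql:host=localhost;dbname=webshop',   'shop_user', 'shop_pass');
$invoices = new PDO('mysql:host=localhost;dbname=invoicing', 'inv_user',  'inv_pass');
$shop->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$invoices->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// 1) Select records that still need to be synchronised.
$orders = $shop->query('SELECT id, clientId, shipping, reduction FROM `order` WHERE status = 2')
               ->fetchAll(PDO::FETCH_ASSOC);

$insert   = $invoices->prepare('INSERT INTO invoice (order_id, client_id, shipping, reduction) VALUES (?, ?, ?, ?)');
$markDone = $shop->prepare('UPDATE `order` SET status = 3 WHERE id = ?');

// 2) Loop over the records.
foreach ($orders as $order) {
    // 3) Start a transaction on each connection.
    $shop->beginTransaction();
    $invoices->beginTransaction();
    try {
        // 4) Insert the record in db2.
        $insert->execute([$order['id'], $order['clientId'], $order['shipping'], $order['reduction']]);
        // 5) Update the record in db1 so it is not picked up again.
        $markDone->execute([$order['id']]);
        // 6) Commit both.
        $invoices->commit();
        $shop->commit();
    } catch (Exception $e) {
        $invoices->rollBack();
        $shop->rollBack();
        throw $e;
    }
}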
Are the databases running on separate instances of MySQL?
If so...
What is the best way to do this?
Use the same database structure on both systems and use mysql's asynchronous replication.
Failing that, use the federated engine to access tables from one instance on the other, then follow the procedure for same instance below.
If on same MySQL instance....
Make sure you've got an indexed update timestamp on every source table you want to sync and copy the records which are eligible for copying and which have been modified since the last capture.
You could also look into some PHP classes that do this for you: http://www.phpclasses.org/search.html?words=mysql+sync&x=0&y=0&go_search=1
(Source: How can I synchronize two database tables with PHP?)
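As a small illustration of the timestamp-based capture mentioned above, assuming an indexed updated_at column on the source table and a stored last-capture time (both are assumptions, not part of the question's schema):

<?php
// Sketch of the incremental capture; updated_at and the stored $lastCapture are assumptions.
$shop = new PDO('mysql:host=localhost;dbname=webshop', 'shop_user', 'shop_pass');

$lastCapture = '2013-01-01 00:00:00'; // e.g. read from a sync_state table or a file

$stmt = $shop->prepare('SELECT id, clientId, status, shipping, reduction FROM `order` WHERE updated_at > ?');
$stmt->execute([$lastCapture]);
$changed = $stmt->fetchAll(PDO::FETCH_ASSOC);

// ...copy $changed to the other database as in the loop above,
// then persist the current time as the new capture point.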
I have a field in my table that I need to move to a completely different database. At this point I have one database, db1, which has db1table with all the data, and an empty database, db2, which has db2table.
db1 table looks like this:
id other_db_id data_to_be_moved
---------------------------------------
1 NULL data
2 NULL data
3 NULL data
4 NULL data
5 NULL data
db2 table looks like this:
id data
--------------
empty
I usually use an ORM to access the database, but this time I'm doing it with plain MySQL and PHP, so I need a little help, especially with how I'd connect to 2 databases at the same time.
What I'd like to do is select the first 10 records from the db1 table, read the field data_to_be_moved and use it to create a new record in the db2 table. Then get the id of the newly inserted record and insert it back into the original database as the field other_db_id.
The way I'm connecting to a single database is this. How will I access both databases at the same time?
$connection = mysql_connect("localhost", "db1user","db1pass");
mysql_select_db("db1", $connection);
and I'm selecting the first 10 records to be manipulated as follows:
Select * From table Where Id BETWEEN 5 AND 10;
but I'm not sure how to proceed with the switching of the databases to achieve what I described above.
Basically you need to know how to handle multiple databases.
The following video will explain how to deal with two (or more) databases: video
You could store the intermediate values into a PHP variable, then switch database and do your thing.
If the databases are on the same server, you can access both tables at the same time by using this syntax:
insert into db2.db2table
select other_db_id, data_to_be_moved
from db1.db1table;
This requires that the login has at least SELECT access to the other database.
If the databases are on different server, you can use federated table:
http://dev.mysql.com/doc/refman/5.0/en/federated-use.html
Um,
$connection2 = mysql_connect("localhost", "db2user","db2pass");
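Expanding on that: open a second link and pass the right link to each call, so the script can read from db1, insert into db2, and write the new id back. A rough sketch with the same mysql_* API as the question (anything beyond the question's table/column names is illustrative):

<?php
// Rough sketch using the question's mysql_* API; column handling is illustrative.
$db1 = mysql_connect('localhost', 'db1user', 'db1pass');
mysql_select_db('db1', $db1);

$db2 = mysql_connect('localhost', 'db2user', 'db2pass');
mysql_select_db('db2', $db2);

$result = mysql_query('SELECT id, data_to_be_moved FROM db1table WHERE id BETWEEN 5 AND 10', $db1);

while ($row = mysql_fetch_assoc($result)) {
    // Create the record in db2...
    $data = mysql_real_escape_string($row['data_to_be_moved'], $db2);
    mysql_query("INSERT INTO db2table (data) VALUES ('$data')", $db2);
    $newId = mysql_insert_id($db2);

    // ...and write its id back into db1 as other_db_id.
    mysql_query('UPDATE db1table SET other_db_id = ' . (int) $newId . ' WHERE id = ' . (int) $row['id'], $db1);
}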
I have two database tables, one in MySQL and one in MSSQL. Both have similar data, and one is based on data from the other. They're in two different databases because one is a remote system administered elsewhere, and the local system is a Drupal installation which I'm using to show the data in a friendlier manner through a custom module.
For example, I have a table of this sort of structure in MSSQL:
ID | Title | Description | Other fields I don't care about
And based on pulling data from this table I generate a table in MySQL:
local_id | remote_id | title | description
When the module is initialized, it goes out and does a select from the MSSQL table and generates records and populates the local database. Remote_id is the ID field in the MSSQL database so we can reference the two records together.
I need to sync up this data: deleting records locally which no longer exist in the remote table, creating new records which do not exist locally, and also updating every row's information.
Problem is, that sort of thing requires at least 2 different transactions, with possible per-row transactions as well. Example:
To sync local to Remote and remove non-existent remote records:
Select remote_id from local_table;
For Each remote_id ( select ID, title, description FROM remote_table where ID = remote_id )
If record exists
UPDATE local_table WHERE remote_id = row_id
Else
DELETE FROM local_table where remote_id = row_id
Then we need at least one other transaction to get new records (I could update here too if I didn't do it in the previous loop):
Select ID, title, description from remote_table;
For each ID ( Select remote_id from local_table )
If does not exist
INSERT INTO local_table (VALUES)
So that's a lot of db activity. It would be easier if the tables were of the same type but as it is that's the only way I know how to do it. Is there a better way? Could I just pull both result sets into an associative array and compare that way and only do the transactions necessary to remove and create? I'm unsure.
There are a lot of ways to do this, depending on the system you have.
The first assumption I am making is that you have 2 databases and you want to sync data between them,
that is, the MSSQL db must pull data from MySQL and vice versa.
Your approach of using associative arrays is good, but what if there are 100 columns in the table? (In your case there are not, but the approach is not future-proof.)
So to update 1 row you need to make "n" column comparisons; if there are 100 rows, then there will be 100*n comparisons.
Have a look at MySQL's REPLACE and INSERT ... ON DUPLICATE KEY UPDATE clauses, which might help you - I don't know if there are such clauses in MSSQL.
You can do other things too - have a "last_updated" column in each database table; whenever a column in the table gets updated, this timestamp field must be updated as well.
This way you can tell if a row in either database table was updated (by comparing it to your old timestamp value) and only update those rows.
The logic would be along these lines:
to sync local to remote
foreach localrow
    get the common_id of the row
    get the timestamp of the row
    check if a row with this common_id exists in the remote table
    if no, then insert
    if yes, then
        compare timestamps between local and remote row
        if local row timestamp > remote row timestamp, then update remote row
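To make the ON DUPLICATE KEY suggestion concrete on the MySQL side, a hedged sketch; it assumes a UNIQUE index on remote_id, and $remoteRows stands in for whatever the MSSQL select returns:

<?php
// Sketch: upsert rows pulled from MSSQL into the local MySQL table, then prune rows
// that no longer exist remotely. The UNIQUE index on remote_id and $remoteRows are assumptions.
$local = new PDO('mysql:host=localhost;dbname=drupal', 'user', 'pass');

$upsert = $local->prepare(
    'INSERT INTO local_table (remote_id, title, description)
     VALUES (:id, :title, :description)
     ON DUPLICATE KEY UPDATE title = VALUES(title), description = VALUES(description)');

foreach ($remoteRows as $row) {
    $upsert->execute([
        ':id'          => $row['ID'],
        ':title'       => $row['Title'],
        ':description' => $row['Description'],
    ]);
}

// Delete local rows whose remote counterpart no longer exists.
$ids = implode(',', array_map('intval', array_column($remoteRows, 'ID')));
if ($ids !== '') {
    $local->exec("DELETE FROM local_table WHERE remote_id NOT IN ($ids)");
}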
Rather than doing row-by-row operations you could do set-based operations, e.g.:
INSERT INTO local_table (remote_id, title, description)
SELECT ID, Title, Description FROM remote_table
WHERE NOT EXISTS (SELECT 1 FROM local_table WHERE remote_table.ID = local_table.remote_id)
In order to do that you'll need to add a linked server; see sp_addlinkedserver. You can create a link from SQL Server to any server listed on that page. This includes any database that has an ODBC driver, which MySQL does.
I am not aware if MySQL is capable of doing the reverse.
I'm learning basic PHP and MySQL right now.
As of now I have 2 tables; a zoo table (parent table) and a species table (child table).
The zoo table contains ID (PK) and animal_name. The species table contains an ID (PK) and animal_id (FK; zoo.ID) and species_name.
My question here is: what is the best practice for inserting into multiple tables when I create a row in the zoo table?
Currently, the idea that comes to mind is to have 2 SQL statements, with a process like this:
1. Insert animal_name into zoo
2. Get the last insert ID from the zoo table
3. Close cursor
4. Insert animal_id (the last insert ID from zoo) and species_name into the species table
5. Close cursor
Is this the best practice? Is there any way I can improve this process with scalability in mind (i.e. when I add more tables with foreign keys referencing the zoo table in the near future)?
I have searched around here and some people suggested triggers, but that is not supported with the MyISAM storage engine. I'm using the MyISAM engine with PHP's PDO MySQL driver.
With PDO, you can get the ID of the row you just inserted without another query - just do
$newRowID = $conn->lastInsertId();
where $conn is your database connection.
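Putting that together with the numbered steps from the question (the DSN, credentials and sample values are placeholders):

<?php
// Minimal sketch of the two-step insert with PDO; DSN, credentials and values are placeholders.
$conn = new PDO('mysql:host=localhost;dbname=zoo_db', 'user', 'pass');
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Item 1: insert the parent row.
$stmt = $conn->prepare('INSERT INTO zoo (animal_name) VALUES (?)');
$stmt->execute(['Elephant']);

// Item 2: grab the ID that was just generated on this connection.
$newRowID = $conn->lastInsertId();

// Item 4: insert the child row that references the parent.
$stmt = $conn->prepare('INSERT INTO species (animal_id, species_name) VALUES (?, ?)');
$stmt->execute([$newRowID, 'Loxodonta africana']);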
Use the MySQL built-in facilities for finding the last generated identity.
So for your Item 2, use this SQL statement:
`SELECT LAST_INSERT_ID()`
This gets the last auto incremented ID on your connection. Load that value into a variable, and use that in your Item 4 INSERT statement.
How to Get the Unique ID for the Last Inserted Row with MySQL