Use MySQL Trigger to Limit Inserts? - php

We have an internal PHP web app that's used for scheduling. We often have multiple users trying to schedule appointments for the same time slot, but we have a hard limit on how many appointments we can have per available time.
We've used PHP to verify that there are available slots before booking, but there's still enough time between the PHP check and the insert that overbooking can happen.
I believe the solution is a MySQL trigger that checks the table before the insert. The problem is that I need MySQL to be able to count the number of records that have the same "schedule_id" and "schedule_user_date" as the record about to be inserted (this will be how many appointments already exist for that time slot).
I also have to somehow let the trigger know what the maximum number of appointments per time slot is, which is where I'm stuck, since this limit can change from client to client.
If you have other suggestions other than a MySQL trigger, I'd like to hear about those as well.
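A minimal sketch of such a trigger (MySQL 5.5+ for SIGNAL), assuming an appointments table with the columns described above and a hypothetical schedule_limits table holding each client's per-slot maximum:

CREATE TABLE schedule_limits (
    schedule_id INT PRIMARY KEY,
    max_appointments INT NOT NULL  -- per-slot limit, configurable per client
);

DELIMITER //
CREATE TRIGGER limit_appointments
BEFORE INSERT ON appointments
FOR EACH ROW
BEGIN
    DECLARE existing INT;
    DECLARE allowed INT;

    -- count appointments already booked for this slot
    SELECT COUNT(*) INTO existing
    FROM appointments
    WHERE schedule_id = NEW.schedule_id
      AND schedule_user_date = NEW.schedule_user_date;

    -- look up the client-specific limit
    SELECT max_appointments INTO allowed
    FROM schedule_limits
    WHERE schedule_id = NEW.schedule_id;

    IF existing >= allowed THEN
        SIGNAL SQLSTATE '45000'
        SET MESSAGE_TEXT = 'Time slot is fully booked';
    END IF;
END//
DELIMITER ;

Note that a plain COUNT(*) inside a trigger can still race under truly concurrent inserts; to be fully safe, wrap the check-and-insert in a transaction with a locking read (SELECT ... FOR UPDATE on a per-slot parent row), or serialize the inserts some other way.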

Related

How to initiate and perform a DB action periodically in php/mysql

I'm using PHP/MySQL and have to update a user's counter at a given frequency from their creation date. For category 1 users, the counter increments every 24 hours for 3 days; for category 2, every week for 4 weeks.
I thought of creating a trigger on the users table so that after each insertion I create an event that starts at the creation time and runs at the desired frequency. But I don't know how the DB would cope, especially since several thousand events would be created.
The other possibility is a cron job that checks the creation date, category, and last occurrence every hour, for example. But this won't match the exact date/time of creation, and it would have to scan every row even when the update applies to only some users.
Which approach is best and won't impact DB performance (we have a very large number of users)? Is there another, more efficient way to do this?
I wouldn't recommend adding a crontab entry for each user. There are two options here: MySQL events, or a queue like beanstalkd where you add a job for each user.
I'm more inclined towards MySQL events.
Here is a doc:
http://www.mysqltutorial.org/mysql-triggers/working-mysql-scheduled-event/
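As a rough sketch of the recurring-event approach (the increments_done and next_increment_at columns are assumptions added to support it, not part of your schema), a single event can advance every counter that is due, instead of creating one event per user:

SET GLOBAL event_scheduler = ON;  -- events only run while the scheduler is enabled

CREATE EVENT increment_user_counters
ON SCHEDULE EVERY 5 MINUTE
DO
    UPDATE users
    SET counter = counter + 1,
        increments_done = increments_done + 1,
        -- schedule the next increment a day or a week out, by category
        next_increment_at = IF(category = 1,
                               next_increment_at + INTERVAL 1 DAY,
                               next_increment_at + INTERVAL 1 WEEK)
    WHERE next_increment_at <= NOW()
      AND ((category = 1 AND increments_done < 3)
        OR (category = 2 AND increments_done < 4));

This touches only the rows that are due, so it scales much better than rescanning the whole table on every run.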

How to get the time difference in seconds between two db column status (or events)

I am trying to get the time difference in seconds in my database between two events. The users table has various columns, but I'm working with the status column. If a user places an order on our website, the status column shows "pending", and if it is confirmed by our agents, it then switches to "success". I'm trying to get the time difference (in secs) between when it shows pending and when it shows success.
NB: I'll be glad if anyone can explain the time() function in PHP with an example.
You can use MySQL's unix_timestamp() function. Assuming that your table has two records for the two events and a column called eventTime, the two queries below give you the two values, each the number of seconds since the Epoch. Subtract the former from the latter to get the time difference:
select unix_timestamp(eventTime) ... where status='pending'
select unix_timestamp(eventTime) ... where status='success'
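If the two rows share a key tying them to the same order (the orders table and order_id column here are assumptions for illustration), the subtraction can also be done in a single query:

select unix_timestamp(s.eventTime) - unix_timestamp(p.eventTime) as seconds_between
from orders p
join orders s on s.order_id = p.order_id
where p.status = 'pending'
  and s.status = 'success';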
Update
After re-reading your question, I guess your DB design keeps only one row for the whole life cycle of the transaction (from pending to success). In this case, if all three parties involved (the agent who updates the status to pending, the agent who updates the status to success, and the agent who needs to find the time difference between the two events) are the same thread, then you can keep the two event times in memory and simply compute the difference.
However, I think it is more likely that the three parties are two or three different threads. In this case, you need some mechanism to pass the knowledge of the first event time from one thread to another. This can be done by adding a new column called lastUpdateTime, or by adding a new table for the purpose of time tracking.
By the way, if you use the second approach, a MySQL trigger may be useful: whenever the main table gets updated, the trigger fires another command to update the second table, which is used solely to keep track of event times. This approach lets you leave the original table unchanged and just add a new one.
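A minimal sketch of that second approach, with a hypothetical status_log table (all names here are assumptions):

CREATE TABLE status_log (
    user_id INT,
    status VARCHAR(20),
    changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
CREATE TRIGGER log_status_change
AFTER UPDATE ON users
FOR EACH ROW
BEGIN
    -- record the moment the status actually changes
    IF NEW.status <> OLD.status THEN
        INSERT INTO status_log (user_id, status)
        VALUES (NEW.id, NEW.status);
    END IF;
END//
DELIMITER ;

-- elapsed seconds for one user (42 is an example id):
SELECT TIMESTAMPDIFF(SECOND,
    (SELECT changed_at FROM status_log WHERE user_id = 42 AND status = 'pending'),
    (SELECT changed_at FROM status_log WHERE user_id = 42 AND status = 'success')
) AS seconds_elapsed;

If "pending" is set when the row is first inserted rather than by an update, you would also need a matching AFTER INSERT trigger.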

Automatic deletion of SQL records over 5 seconds old using phpMyAdmin

So I'm using WampServer with the default phpMyAdmin to store this SQL table called Typing.
Table: Typing
Now I want to set the typing column to 0 for any row whose typing column was set to 1 more than five seconds ago.
For example: I just set the typing column to 1 for the first row, and my database detects the time since that 1 was written, then sets a 5-second timer to revert it back to 0. If the 1 is overwritten with another 1 during that time, the timer should reset.
How should I go about this? Should I have a column for a 'timestamp' of each record? How do I make my database constantly check for entries older than 5 seconds without user input? Do I need an always-on PHP script or a database trigger, and how would I go about that?
As @JimL suggested, it might be a bit too ambitious to purge the records after only five seconds.
It might be helpful to have more information about what you're trying to accomplish, but I'll answer in a generic way that should address your question.
How I would handle this: any queries should only look at records that are less than five seconds old (I assume you're querying the data and only want records from the last five seconds; otherwise I'm not really following the point of your question).
Once a day, or hourly if you have that much data, you can run a scheduled job (scheduled through MySQL itself, not through cron/Windows Scheduled Tasks) to purge the old records. You can use phpMyAdmin to set that up (the "Events" tab), although it's actually a MySQL feature that doesn't require phpMyAdmin.
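A sketch of such a scheduled job, assuming the typing table has a recordDate timestamp column (as in the asker's follow-up below):

SET GLOBAL event_scheduler = ON;  -- events only run while the scheduler is enabled

CREATE EVENT purge_old_typing
ON SCHEDULE EVERY 1 HOUR
DO
    DELETE FROM typing
    WHERE TIMESTAMPDIFF(SECOND, recordDate, CURRENT_TIMESTAMP) > 5;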
I got it, I added a timestamp to each record and used this code:
mysqli_query($con, "DELETE FROM `typing` WHERE TIMESTAMPDIFF(SECOND, recordDate, CURRENT_TIMESTAMP) > 1");
It's not a cron job though, so it only runs if someone is accessing the site, but it's good enough for what I need. Thanks for the help everyone :)

Sybase IQ cache database result

I'm using a query that calculates some values on a table with about 11 million rows, and I need to display the results in real time on my site, but the calculation takes about 1 minute to execute. The table content changes every 30 minutes, so I don't have to recalculate the results each time a user reloads the page. How can I cache the results of the calculation? Via PHP (I use ODBC), or with some SQL statement or Sybase IQ option? Thanks.
I also asked this question at https://dba.stackexchange.com/. Sorry for the duplication; I couldn't figure out which was the better place.
So I found a solution. Not optimized, but it works for me. I insert my calculations into a temp table and add a column with the current date. At script start I check whether the table is older than 30 minutes; if so, I drop it and create it again.
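In MySQL-flavored SQL for illustration (Sybase IQ syntax will differ, and all names here are assumptions), the shape of that approach is:

-- rebuild the cache and stamp it with the build time
DROP TABLE IF EXISTS calc_cache;
CREATE TABLE calc_cache AS
SELECT some_key, SUM(some_value) AS total, NOW() AS cached_at
FROM big_table
GROUP BY some_key;

-- on each page load, check freshness; if this returns no rows,
-- the stamp is stale and the script rebuilds the table
SELECT * FROM calc_cache
WHERE cached_at > NOW() - INTERVAL 30 MINUTE;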

Mysql - Summary Tables

Which method do you suggest and why?
Creating a summary table and...
1) Updating the table as the action occurs in real time.
2) Running group by queries every 15 minutes to update the summary table.
3) Something else?
The data must be near real time, it can't wait an hour, a day, etc.
I think there is a 3rd option, which might allow you to manage your CPU resources a little better. How about writing a separate process that periodically updates the summarized data tables? Rather than recreating the summary with a group by, which is GUARANTEED to run slower over time because there will be more rows every time you do it, maybe you can just update the values. Depending on the nature of the data, it may be impossible, but if it is so important that it can't wait and has to be near-real-time, then I think you can afford the time to tweak the schema and allow the process to update it without having to read every row in the source tables.
For example, say your data is just login_data (cols username, login_timestamp, logout_timestamp). Your summary could be login_summary (cols username, count). Once every 15 mins you could truncate the login_summary table and re-insert with a SELECT username, COUNT(*) ... GROUP BY username kind of query. But then you'd have to rescan the entire table each time. To speed things up, you could add a last_update column to the summary table. Then every 15 mins you'd update each user's row using only the login rows newer than that user's last_update. More complicated, of course, but it has some benefits: 1) you only update the rows that changed, and 2) you only read the new rows.
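A sketch of that incremental refresh in MySQL, using the hypothetical login_data/login_summary tables from the example:

-- fold in only the logins newer than each user's last_update
UPDATE login_summary s
JOIN (
    SELECT d.username,
           COUNT(*) AS new_logins,
           MAX(d.login_timestamp) AS newest
    FROM login_data d
    JOIN login_summary ls ON ls.username = d.username
    WHERE d.login_timestamp > ls.last_update
    GROUP BY d.username
) fresh ON fresh.username = s.username
SET s.`count` = s.`count` + fresh.new_logins,
    s.last_update = fresh.newest;
-- brand-new users (not yet in login_summary) would need a separate
-- INSERT ... SELECT, omitted here for brevity.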
And if 15 minutes turned out to be too old for your users, you could adjust it to run every 10 mins. That would have some impact on CPU of course, but not as much as redoing the entire summary every 15 mins.
