Retrieving timestamp of MySQL row creation from metadata? - php

I have been maintaining a MySQL table of rows of data inserted from PHP code for about a year and a half. Really stupid, I know: I didn't include a timestamp column recording when these rows were inserted.
Is there any way I can retrieve the creation timestamp of these rows through some metadata or other means (in MySQL, phpMyAdmin, or anything else)?

Unfortunately, there's nothing you can do in this case. MySQL does not keep a hidden per-row creation timestamp; if it did, every table would grow by at least 4 bytes per row.

The only way to get that timestamp is if it was saved somewhere else on one of your servers. You have a web server, for which you may keep an archive of access logs, or some other place that records timestamps for the PHP scripts making requests to the database.
Say you have web server logs with an entry for each or most of the PHP script activity; then, potentially, you can parse those logs, extract the timestamps, and map them to the rows in your database. As you can see, it is quite laborious, but not utterly impossible.
As for MySQL (or any other database), it does not normally keep a large archive of past information. The main reason is that it is up to the developer or designer of the application to decide what information should be kept; the database keeps only the data it needs to run properly.
One more idea: if you have an archive of transaction logs (which I really doubt), you could replay them against a backup of the database, and they may contain the timestamp of each row being added or changed.

If you are lucky, you have records in other tables that depend on the record you are interested in. These records may have a timestamp when they were created.
So you have at least a ballpark when the record you care about may have been created.
Other than that, the rate at which the primary key grows over time may provide another estimate of when your record was created.
But yes, these are just estimates. Other approaches are mentioned in the other existing answers.
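If you know (or can infer from related records) approximate creation times for two ids that bracket the row in question, a rough linear interpolation over the auto-increment id is easy to compute. This is only a sketch; the table is assumed to have an auto-increment id, and all ids and dates below are hypothetical.

    -- Rough estimate by linear interpolation over auto-increment ids.
    -- Assumes ids 1000 and 9000 have known (or inferred) creation times;
    -- we estimate when id 4200 was created. All values are hypothetical.
    SET @id_a = 1000, @ts_a = UNIX_TIMESTAMP('2023-01-01 00:00:00');
    SET @id_b = 9000, @ts_b = UNIX_TIMESTAMP('2023-06-01 00:00:00');
    SET @id_x = 4200;

    SELECT FROM_UNIXTIME(
             @ts_a + (@ts_b - @ts_a) * (@id_x - @id_a) / (@id_b - @id_a)
           ) AS estimated_created_at;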

Here is a way to add automatic created and modified timestamps going forward:
https://www.marcus-povey.co.uk/2013/03/11/automatic-create-and-modified-timestamps-in-mysql/
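In short, the linked article lets MySQL maintain the timestamps itself, so at least all future rows are covered. A minimal sketch (table and column names are examples; having both columns default to CURRENT_TIMESTAMP requires MySQL 5.6.5+, and existing rows will simply get the time of the ALTER):

    -- created_at is filled on INSERT, updated_at also refreshes on UPDATE
    ALTER TABLE my_table
      ADD COLUMN created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
      ADD COLUMN updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                                      ON UPDATE CURRENT_TIMESTAMP;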
I hope this helps.

Related

MySQL or JSON for data retrieval

So, I have a situation and I need a second opinion. I have a database and it's working great with all foreign keys, indexes and stuff, but when I reach a certain number of visitors, around 700-800 concurrent visitors, my server hits a bottleneck and displays "Service temporarily unavailable." So, I had an idea: what if I pull data from JSON instead of the database? I mean, I would still update the database, but on each update I would regenerate a JSON file and pull data from it to show on my homepage. That way I would not press my CPU too hard and I would be able to do some kind of caching on the user end.
What you are describing is caching.
Yes, it's a common optimization to avoid over-burdening your database with query load.
The idea is you store a copy of data you had fetched from the database, and you hold it in some form that is quick to access on the application end. You could store it in RAM, or in a JSON file. Some people operate a Memcached or Redis in-memory database as a shared resource, so your app can run many processes or threads that access the same copy of data in RAM.
It's typical that your app reads some given data many times for every single time it updates the data. The greater this ratio of reads to writes, the better the savings in terms of lightening the load on your database.
It can be tricky, however, to keep the data in cache in sync with the most recent changes in the database. In other words, how do all the cache copies know when they should re-fetch the data from the database?
There's an old joke about this:
There are only two hard things in Computer Science: cache invalidation and naming things.
— Phil Karlton
So after another few days of exploring and trying to get the right answer, this is what I have done. I decided to create another table, instead of JSON, and put all the data that was supposed to go into the JSON file into that table.
WHY?
The number one reason is that MySQL can lock tables while they're being updated; a JSON file cannot.
Number two is that I go from a few dozen queries down to one, the simplest query there is: SELECT * FROM table.
Number three is that I have better control over the content this way.
Number four: while I was searching for an answer I found that some people had issues with JSON availability when a lot of concurrent connections were requesting the same JSON; this way I never have an availability problem.
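For what it's worth, a sketch of that cache-table approach with made-up table and column names: rebuild the table inside a transaction whenever the source data changes, and the homepage only ever runs the one cheap query.

    -- Rebuild the cache table atomically so readers never see it half-filled
    -- (requires a transactional engine such as InnoDB)
    START TRANSACTION;
    DELETE FROM homepage_cache;
    INSERT INTO homepage_cache (post_id, title, vote_count)
    SELECT p.id, p.title, COUNT(v.id)
    FROM posts p
    LEFT JOIN votes v ON v.post_id = p.id
    GROUP BY p.id, p.title;
    COMMIT;

    -- The homepage then needs only:
    SELECT * FROM homepage_cache;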

Handling large number of MySQL tables

I have an API which is being used by around 200 websites right now. The number is expected to grow very soon. I need to store information about each visitor (IP address etc.) on clients' websites. The number of daily visitors for each client ranges from 2,000 to 50,000. That means I am adding 400,000 to 500,000 rows every day. For that, right now I am creating a different table for each client.
Now the problem is when I try to fetch data from all tables combined, it takes a lot of time. How should I handle this? How should I store the data?
Thanks!
I always try to keep tables to a minimum in my schemas. Perhaps you should make a client table with relevant client information and then have a visitor table with all the visitor information. Link the two with a foreign key.
Since all the tables are the same, I'd just keep the visitor information in one table, with a column to identify the client / website.
The question then is whether a large table like that will still perform... Obviously you need your indexing and so on, but here are a couple of ideas:
Partitioning: I know nothing about partitioning in MySQL (but have tried it in PostgreSQL). The idea is to design the physical data storage to suit your data retrieval / work needs. It might be worth it if your table gets huge (a sketch follows below).
"Live" and "archive" tables. I'm sure there's proper terminology for this. Again, depending on how you're analysing your data, you can keep today's / this week's / this month's / whatever you need's data in the "live" table where new records are added, then have housekeeping functions that move older records to a larger archive table. The idea is to keep only the records you want to analyse frequently in the smaller live table, so query performance stays fast.
Lastly, you might be pleasantly surprised by the performance of MySQL even on large tables. I've got a PostgreSQL table with several million records and performance is more than adequate without any playing around.
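Putting the single-table and partitioning ideas together, a sketch of what the consolidated visitor table could look like with monthly range partitioning. All names and dates are illustrative, and note that MySQL does not allow foreign keys on partitioned InnoDB tables:

    CREATE TABLE visits (
      id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
      client_id  INT UNSIGNED    NOT NULL,   -- which of the ~200 websites
      ip         VARBINARY(16)   NOT NULL,   -- fits both IPv4 and IPv6
      visited_at DATETIME        NOT NULL,
      PRIMARY KEY (id, visited_at),          -- partition key must be in every unique key
      KEY idx_client_time (client_id, visited_at)
    )
    PARTITION BY RANGE COLUMNS (visited_at) (
      PARTITION p2024_01 VALUES LESS THAN ('2024-02-01'),
      PARTITION p2024_02 VALUES LESS THAN ('2024-03-01'),
      PARTITION pmax     VALUES LESS THAN (MAXVALUE)
    );

Old monthly partitions can then be dropped or moved to an archive table cheaply, which also covers the "live" vs "archive" idea above.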
Do not store raw data in MySQL. Put visitor data into a queue (based on Redis, RabbitMQ, etc.) and store only the aggregated data that is necessary for your business model.
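If the aggregates do live in a relational table, the consumer draining the queue can maintain them with a single upsert per visit instead of one raw row each. A sketch with hypothetical names:

    -- One row per client per day; the queue consumer bumps the counter.
    -- Assumes a UNIQUE KEY (or primary key) on (client_id, visit_day).
    INSERT INTO daily_visits (client_id, visit_day, visits)
    VALUES (?, CURRENT_DATE, 1)
    ON DUPLICATE KEY UPDATE visits = visits + 1;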

Insert a row every given time else update previous row (Postgresql, PHP)

I have multiple devices (eleven, to be specific) which send information every second. This information is received by an Apache server, parsed by a PHP script, stored in the database and finally displayed in a GUI.
What I am doing right now is check whether a row for the current day exists; if it doesn't, I create a new one, otherwise I update it.
The reason I do it like that is because I need to poll the information from the database and display it in a C++ application to make it look sort of real-time. If I created a row every time a device sent information, processing and reading the data would take a significant amount of time as well as system resources (memory, CPU, etc.), making the display of data not quite real-time.
I wrote a report generation tool which takes the information for every day (from 00:00:00 to 23:59:59) and puts it in an Excel spreadsheet.
My questions are basically:
Is it possible to do the insertion/updating part directly in the database server, or do I have to do the logic in the PHP script?
Is there a better (more efficient) way to store the information without a decrease in performance on the display device?
Regarding the report generation, if I want to sample intervals, let's say starting from yesterday at 15:50:00 and ending today at 12:45:00, it cannot be done with my current data structure. What do I need to consider in order to design a data structure which would allow me to run such queries?
The components I use:
- Apache 2.4.4
- PostgreSQL 9.2.3-2
- PHP 5.4.13
My recommendation: just store all the information your devices are sending. With proper indexes and queries you can process and retrieve information from the DB really fast.
For your questions:
Yes, it is possible to build any logic you desire inside the Postgres DB using SQL, PL/pgSQL, PL/PHP, PL/Java, PL/Python and many other languages built into Postgres.
As I said before, proper indexing can do magic.
If you cannot get the desired query speed from the full table, you can create a small table with one row per device and keep the last known values there to show them in near real time.
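Even without that separate per-device table, PostgreSQL can pull the latest reading per device straight from the full table reasonably cheaply, given a suitable index. A sketch with assumed table and column names:

    -- Latest row per device using DISTINCT ON (PostgreSQL-specific).
    -- Assumes readings(device_id, recorded_at, payload) and an index
    -- on (device_id, recorded_at DESC).
    SELECT DISTINCT ON (device_id) device_id, recorded_at, payload
    FROM readings
    ORDER BY device_id, recorded_at DESC;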
1) The technique is called upsert. In PG 9.1+ it can be done with wCTE (http://www.depesz.com/2011/03/16/waiting-for-9-1-writable-cte/)
2) If you really want it to be real-time, you should send the data directly to the application; storing it in memory or a plain-text file will also be faster if you only care about the last few values. But PG does have LISTEN/NOTIFY channels, so your lag will probably be just 100-200 ms, which shouldn't matter much given you're only displaying it.
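For reference, the writable-CTE upsert pattern from that link looks roughly like this on 9.1/9.2 (ON CONFLICT only arrived in 9.5). Table and column names are illustrative, and under heavy concurrency you would still want retry logic around it:

    -- Try the UPDATE first; INSERT only if it touched no row
    WITH upd AS (
      UPDATE device_daily
         SET reading = 42, updated_at = now()
       WHERE device_id = 7 AND reading_date = current_date
      RETURNING device_id
    )
    INSERT INTO device_daily (device_id, reading_date, reading)
    SELECT 7, current_date, 42
    WHERE NOT EXISTS (SELECT 1 FROM upd);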
I think you are overestimating the memory and system requirements given the process you have described. Adding a row of data every second (or 11 per second) is not a resource hog. In fact, it is likely more time-consuming to UPDATE an existing row than to INSERT a new one. Also, if you add a TIMESTAMP column to your table, sort operations are lightning fast. Just add some garbage-collection handling (deletion of old data) as a cron job once a day or so and you are golden.
However to answer your questions:
Is it possible to do the insertion/updating part directly in the database server or do I have to do the logic in the PHP script?
Writing logic within the database engine is usually not very straightforward. To keep it simple, stick with the logic in the PHP script: UPDATE (or) INSERT INTO table SET var1='assignment1', var2='assignment2' (WHERE id = 'checkedID')
Is there a better (more efficient) way to store the information without a decrease in performance on the display device?
It's hard to answer because you haven't described the display device connectivity. There are more efficient ways to handle the process, but none that also provide the locking mechanisms required for such frequent updating.
Regarding the report generation, if I want to sample intervals let's say starting from yesterday at 15:50:00 and ending today at 12:45:00, it cannot be done with my current data structure, so what do I need to consider in order to make a data structure which would allow me to create such queries?
You could use a TIMESTAMP column type. This would record the DATE and TIME of the UPDATE operation. Then it's just a simple WHERE clause using date functions in the database query.
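In other words, once every reading carries its own timestamp, the report interval becomes an ordinary range predicate. A sketch with hypothetical dates and names:

    -- Report rows from yesterday 15:50:00 to today 12:45:00
    SELECT *
    FROM readings
    WHERE recorded_at >= '2024-05-01 15:50:00'
      AND recorded_at <  '2024-05-02 12:45:00'
    ORDER BY recorded_at;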

MySQL and UNIX_TIMESTAMP insert error

I have a problem with a project I am currently working on, built in PHP & MySQL. The project itself is similar to an online bidding system. Users bid on a project, and they get a chance to win if they follow their bid by clicking and clicking again.
The problem is this: if, for example, 5 users enter the game at the same time, I get an 8-10 second delay in the database - I update the database using UNIX_TIMESTAMP(CURRENT_TIMESTAMP) - which makes the whole bidding system useless.
I want to mention too that the project is very database-intensive (around 30-40 queries per page) and I was thinking maybe the queries get delayed, but I'm not sure if that's happening. If that's the case though, any suggestions on how to avoid this type of problem?
Hope I've been at least clear with this issue. It's the first time it happened to me and I would appreciate your help!
You can decide on:
Optimizing or minimizing the required queries.
Caching queries that do not need to be updated on each visit.
Using summary tables.
Updating the queries only when something changes.
You have to do this cleverly. You can follow the MySQLPerformanceBlog for ideas.
I'm not clear on what you're doing, but let me elaborate on what you said. If you're using UNIX_TIMESTAMP(CURRENT_TIMESTAMP()) in your MySQL query, you have a serious problem.
The problem with your approach is that you are using MySQL functions to supply the timestamp record that will be stored in the database. This is an issue, because then you have to wait on MySQL to parse and execute your query before that timestamp is ever generated (and some MySQL engines like MyISAM use table-level locking). Other engines (like InnoDB) have slower writes due to row-level locking granularity. This means the time stored in the row will not necessarily reflect the time the request was generated to insert said row. Additionally, it can also mean that the time you're reading from the database is not necessarily the most current record (assuming you are updating records after they were inserted into the table).
What you need is for the PHP request that generates the SQL query to provide the TIMESTAMP directly in the SQL query. This means the timestamp reflects the time the request is received by PHP and not necessarily the time that the row is inserted/updated into the database.
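Concretely, the idea is that the query carries the timestamp as a bound parameter that PHP computed when the request came in (e.g. from time()), rather than calling a MySQL time function. A sketch with made-up table and column names:

    -- The ? placeholders are bound from PHP; the first one is the
    -- request-time timestamp, not a value MySQL computes at execution time
    UPDATE auctions
       SET last_bid_at = FROM_UNIXTIME(?), last_bidder_id = ?
     WHERE id = ?;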
You also have to be clear about which MySQL engine your table is using. For example, engines like InnoDB use MVCC (Multi-Version Concurrency Control). This means that while a row is being read it can be written to at the same time. If this happens, the engine keeps the existing value (in its undo log) to serve to reading clients while the new value is being written. That way you get row-level locking with faster and more stable reads, but potentially slower writes.

Two Way Sync Logic

I have a CSV file with information about our inventory that gets changed locally and then uploaded to my web server at night. The website also has a copy of the inventory information in its MySQL database that might have also changed.
What I want to accomplish is a two-way sync between the inventory information in the database and the CSV file that's uploaded. Parsing the CSV and extracting the info from the database isn't a problem, but now that I have the two sets of data, I'm struggling to figure out how to sync them.
If a record differs between the CSV and the database, how do I know which one to use? I really don't want to resort to having my users timestamp every change they make in the CSV. Is there some way I can tell which information is more current?
Any help is greatly appreciated.
P.S. Just in case you're wondering, I tagged this question PHP because that's the language I'll be using to accomplish the synching.
You should create a timestamp field and have the application update it every time the record changes.
I have built a similar app before, where multiple sites sync records up and down based on three timestamps: one to track when the record was last updated, one to track when the record was deleted, and one to track when the changes were copied to this PC.
Then, on every PC, I also track the last time the records were synchronized with each other PC.
This way, the latest record can always be propagated to all the PCs.
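A sketch of those tracking columns in MySQL terms (all names are illustrative; a soft-delete timestamp replaces physically removing rows so deletions can be propagated too):

    -- Per-record change tracking
    ALTER TABLE inventory
      ADD COLUMN updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                                      ON UPDATE CURRENT_TIMESTAMP,
      ADD COLUMN deleted_at TIMESTAMP NULL DEFAULT NULL;

    -- Per-peer record of the last successful sync
    CREATE TABLE sync_state (
      peer           VARCHAR(64) NOT NULL PRIMARY KEY,  -- e.g. 'csv-upload'
      last_synced_at TIMESTAMP   NOT NULL
    );

During a sync, any row whose updated_at (or deleted_at) is newer than the peer's last_synced_at is the one to propagate.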
This is more of a versioning issue. A simple solution would be to compare all 'lines' or 'records' (if you have unique identifiers) and ask the user to pick the right values.
