I am trying to build an application that queries a database and sends data somewhere else as it comes into the database.
I can get the data I want from the database using this code:
$sql = "SELECT * FROM `signals` ORDER BY `time` DESC LIMIT 100";
$result = mysqli_query($DatabasePointer, $sql)
    or die(mysqli_error($DatabasePointer));
$row_cnt = mysqli_num_rows($result);
if ($row_cnt > 0)
{
    $array = array();
    while ($row = mysqli_fetch_array($result))
    {
        $epoch = $row['time'];
        // convert UNIX timestamp to PHP DateTime ("@<epoch>" is DateTime's timestamp syntax)
        $dt = new DateTime("@$epoch");
        if (
            ($row['symbol'] === 'USDJPYecn')
            || ($row['symbol'] === 'USDCADecn')
            || ($row['symbol'] === 'EURUSDecn')
        )
        {
            if (
                ($row['timeframe'] === 'M5')
                || ($row['timeframe'] === 'M15')
            )
            {
                $array[] = array(
                    "time"      => $dt->format('Y-m-d H:i:s'),
                    "signal"    => $row['signal'],
                    "symbol"    => $row['symbol'],
                    "price"     => $row['price'],
                    "timeframe" => $row['timeframe'],
                    "epoch"     => $row['time'],
                    "candel"    => $row['candel']
                );
            }
        }
    } // while
    echo json_encode($array, JSON_UNESCAPED_SLASHES);
}
However, I am not sure how to revise the code to check whether the data is new or has already been sent to the other source. I am also unsure how to send only new rows as they hit the database, rather than the entire array of data I am building now.
Can somebody please point me in the right direction?
EDIT:
$sql = "SELECT * FROM `tdisignals` ORDER BY `time` DESC LIMIT 100";
$result = mysqli_query($DatabasePointer, $sql)
    or die(mysqli_error($DatabasePointer));
$row_cnt = mysqli_num_rows($result);
if ($row_cnt > 0)
{
    $array = array();
    while ($row = mysqli_fetch_array($result))
    {
        $epoch = $row['time'];
        // convert UNIX timestamp to PHP DateTime ("@<epoch>" is DateTime's timestamp syntax)
        $dt = new DateTime("@$epoch");
        if (
            ($row['symbol'] === 'USDJPYecn')
            || ($row['symbol'] === 'USDCADecn')
            || ($row['symbol'] === 'GBPUSDecn')
        )
        {
            if (
                ($row['timeframe'] === 'M5')
                || ($row['timeframe'] === 'M15')
            )
            {
                $array[] = array(
                    "time"      => $dt->format('Y-m-d H:i:s'),
                    "signal"    => $row['signal'],
                    "symbol"    => $row['symbol'],
                    "price"     => $row['price'],
                    "timeframe" => $row['timeframe'],
                    "epoch"     => $row['time'],
                    "candel"    => $row['candel'],
                );
            }
        }
    }
    // echo json_encode($array, JSON_UNESCAPED_SLASHES);
    $fuegostore = json_encode($array, JSON_UNESCAPED_SLASHES);
    // $sql2 = "INSERT INTO fuegosync (time, lastsync) " .
    //         "VALUES ('$date', '$fuegostore')";
    // $result2 = mysqli_query($DatabasePointer, $sql2)
    //     or die(mysqli_error($DatabasePointer));
    $sql3 = "SELECT lastsync, MAX(CAST(time AS CHAR)) FROM `fuegosync`";
    $result3 = mysqli_query($DatabasePointer, $sql3)
        or die(mysqli_error($DatabasePointer));
    $row2 = mysqli_fetch_row($result3);
    if ($row2[0] === $fuegostore)
        echo 'No New Signals';
    else
        echo 'New Signals';
    // OPTION 1:
    // print_r(json_encode($array[0], JSON_UNESCAPED_SLASHES));
    // OPTION 2:
    foreach ($array as $x) {
        if (strtotime($array[0]['time']) >= $row2[1]) {
            echo '<br /><span>' . $x['signal'] . ' - ' . $x['symbol'] . ' - ' . $x['price'] . '<br />';
        } else {
            echo 'No New Signals';
        }
    }
    echo $row2[0];
}
This code successfully detects new data hitting the database. What I am struggling with now is revising the code so it only displays the newly detected rows, not the entire array as you see it.
NEW EDIT:
I have gotten the code to display only the newest piece of data, but now I have a conundrum.
If I poll the database, say, every minute, and a new piece of data hits the database, the script will pick it up. However, if another new piece of data hits the database seconds after the first one was sent to the target, the second piece will be ignored entirely, because the poll only runs once a minute. I would basically have to poll the database every few seconds, and that sounds like a performance nightmare.
The commented-out OPTION 1 is what displays the newest data, but it would skip a newer piece arriving between minute-based polls unless the database were polled every second.
OPTION 2 works, but displays the entire array, so I am unsure how to revise the code to display only the newest pieces, not the entire thing.
Any suggestions?
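One way to avoid per-second polling, while still catching every new row, is to keep a watermark (the timestamp of the last row already sent) and select everything newer than it on each poll, instead of comparing only the single newest row. A rough sketch, reusing the $DatabasePointer mysqli connection from the question; the lastepoch column on the fuegosync-style table is my invention:

```php
// Fetch the watermark: the epoch of the last row already sent.
$res  = mysqli_query($DatabasePointer, "SELECT COALESCE(MAX(lastepoch), 0) FROM `fuegosync`");
$row  = mysqli_fetch_row($res);
$lastEpoch = (int)$row[0];

// Select ONLY rows newer than the watermark. If three signals arrived
// since the previous poll, all three come back; none are skipped.
$stmt = mysqli_prepare($DatabasePointer,
    "SELECT * FROM `tdisignals` WHERE `time` > ? ORDER BY `time` ASC");
mysqli_stmt_bind_param($stmt, 'i', $lastEpoch);
mysqli_stmt_execute($stmt);
$result = mysqli_stmt_get_result($stmt); // needs the mysqlnd driver

$maxSeen = $lastEpoch;
while ($r = mysqli_fetch_assoc($result)) {
    // ... send this single row to the target here ...
    $maxSeen = max($maxSeen, (int)$r['time']);
}

// Advance the watermark so the next poll starts where this one ended.
if ($maxSeen > $lastEpoch) {
    mysqli_query($DatabasePointer,
        "INSERT INTO `fuegosync` (lastepoch) VALUES ($maxSeen)");
}
```

With a watermark like this, the poll interval only affects latency, not correctness, so polling once a minute is safe.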
Secured
One of the most secure ways to do this kind of thing is:
have an incremented field on the source
allow a query on the target
Step-by-step, source driven
you add some data in the source, with its auto-incremented id
you query your target from the source and ask for the last id it knows
with this id, the source fetches all newer records and posts them to an insert page on the target
Alternate, target driven
you add some data in the source, with its auto-incremented id
your target takes its biggest id and asks the source for newer data
the target updates itself
And you can go to step one again. If you are careful with your inserts (use rollback, fail the whole batch on one error), you will have a consistent target even when the source/target link fails, and the bandwidth stays as low as it can be.
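As a sketch of the target-driven variant (all table and column names here are illustrative, and $sourceDb / $targetDb stand in for whatever two connections or transports you actually have):

```php
// On the target: find the highest id we already hold.
$res    = mysqli_query($targetDb, "SELECT COALESCE(MAX(id), 0) FROM `signals_copy`");
$row    = mysqli_fetch_row($res);
$lastId = (int)$row[0];

// Ask the source for everything newer than that id.
$stmt = mysqli_prepare($sourceDb,
    "SELECT id, symbol, price, time FROM `signals` WHERE id > ? ORDER BY id ASC");
mysqli_stmt_bind_param($stmt, 'i', $lastId);
mysqli_stmt_execute($stmt);
$new = mysqli_stmt_get_result($stmt);

// Copy the new rows inside a transaction: a failure mid-batch rolls
// everything back, and the next run simply retries the whole batch.
mysqli_begin_transaction($targetDb);
$ins = mysqli_prepare($targetDb,
    "INSERT INTO `signals_copy` (id, symbol, price, time) VALUES (?, ?, ?, ?)");
while ($r = mysqli_fetch_assoc($new)) {
    mysqli_stmt_bind_param($ins, 'isdi', $r['id'], $r['symbol'], $r['price'], $r['time']);
    mysqli_stmt_execute($ins);
}
mysqli_commit($targetDb);
```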
One-sided and check-less
This allows you to send batches of data from the source without any answer or action from the target.
It does not care if data is lost on the way; each row is only sent once.
have a three-state field "send" on the source
Step-by-step
you add some data in the source, with "send" at its default of 0
you set every row with send == 0 to send = -1
select every -1 row and send them to the target
update -1 to 1
Go back to step one.
This allows big batches without having to hold a write lock while the send script runs, and makes sure no line can slip through between two sends.
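In SQL, the three-state dance could look like this (assuming a small integer "send" column defaulting to 0; the table name is illustrative):

```sql
-- Claim the pending rows atomically; nothing inserted after this
-- UPDATE can sneak into the current batch.
UPDATE `signals` SET `send` = -1 WHERE `send` = 0;

-- Export exactly the claimed rows.
SELECT * FROM `signals` WHERE `send` = -1;

-- After a successful send, mark the claimed rows as done.
UPDATE `signals` SET `send` = 1 WHERE `send` = -1;
```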
Timestamp
This looks a lot like the previous one, but instead of a field on every row, we just use a separate table to keep the last sync:
Step-by-step
you add some data in the source, with its timestamp
you take the current timestamp - 1 (one second before now)
you get the last sync timestamp (or 0 if it's your first sync)
select and send the lines where timestampOfPost <= timestamp - 1 AND timestampOfPost > timestampLastSync
update your last-sync timestamp with the timestamp - 1 from step 2.
Timestamps can be tricky if you don't use the "go one second back in time" trick and keep it in a variable, as you can lose some updates:
If you send at ***754(.1)s, every line from ***754(.2)s to (.9)s will be seen as sent, because we recorded timestamp ***754 and will start the next send at ***755.
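A sketch of that window, assuming a sync_state table that stores last_sync as a UNIX timestamp (the table and column names are illustrative):

```php
$now = time() - 1; // one second back in time, as described above

$res      = mysqli_query($db, "SELECT COALESCE(MAX(last_sync), 0) FROM `sync_state`");
$row      = mysqli_fetch_row($res);
$lastSync = (int)$row[0];

// Half-open window (lastSync, now]: rows still being written during the
// current second are deliberately left for the next run.
$stmt = mysqli_prepare($db,
    "SELECT * FROM `signals` WHERE `time` > ? AND `time` <= ?");
mysqli_stmt_bind_param($stmt, 'ii', $lastSync, $now);
mysqli_stmt_execute($stmt);
// ... send the selected rows ...

mysqli_query($db, "INSERT INTO `sync_state` (last_sync) VALUES ($now)");
```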
Related
Unfortunately I can't show you the code, but I can give you an idea of what it looks like, what it does, and what problem I have...
<?php
include('db.php');
include('tools.php');
$c = new GetDB(); // Connection to DB
$t = new Tools(); // Classes to clean, prevent XSS and others
if(isset($_POST['var'])){
$nv = json_decode($_POST['var']);
foreach($nv as $k) {
$id = $t->clean($k->id);
// ... goes on for about 10 keys
// this might seems redundant or insufficient
$id = $c->real_escape_string($id);
// ... goes on for the rest of keys...
$q = $c->query("SELECT * FROM table WHERE id = '$id'");
$r = $q->fetch_row();
if ($r[1] > 0) {
// Item exist in DB then just UPDATE
$q1 = $c->query(UPDATE TABLE1);
$q4 = $c->query(UPDATE TABLE2);
if ($x == 1) {
$q2 = $c->query(SELECT);
$rq = $q2->fetch_row();
if ($rq[0] > 0) {
// Item already in table just update
$q3 = $c->query(UPDATE TABLE3);
} else {
// Item not in table then INSERT
$q3 = $c->query(INSERT TABLE3);
}
}
} else {
// Item not in DB then Insert
$q1 = $c->query(INSERT TABLE1);
$q4 = $c->query(INSERT TABLE2);
$q3 = $c->query(INSERT TABLE4);
if($x == 1) {
$q5 = $c->query(INSERT TABLE3);
}
}
}
}
As you can see, it is a very basic INSERT/UPDATE script, so before releasing it to full production we ran some tests to check that the script works as it should, and the results were excellent...
We ran this code against 100 requests and everything went just fine: less than 1.7 seconds for the 100 requests. But then we saw the amount of data that needed to be sent/POSTed, and it was a jaw-drop for me: over 20K items. It takes about 3 to 5 minutes to send the POST, but the script always crashes. The "data" is an array in JSON:
array (
[0] => array (
[id] => 1,
[val2] => 1,
[val3] => 1,
[val4] => 1,
[val5] => 1,
[val6] => 1,
[val7] => 1,
[val8] => 1,
[val8] => 1,
[val9] => 1,
[val10] => 1
),
[1] => array (
[id] => 2,
[val2] => 2,
[val3] => 2,
[val4] => 2,
[val5] => 2,
[val6] => 2,
[val7] => 2,
[val8] => 2,
[val8] => 2,
[val9] => 2,
[val10] => 2
),
//... about 10 to 20K depend on the day and time
)
but in JSON... Anyway, sending this information is not the problem; like I said, it can take about 3 to 5 minutes. The problem is the code that receives the data and runs the queries. On normal shared hosting we get a 503 error, which debugging showed to be a timeout. On our VPS we can raise max_execution_time to whatever we need, and processing 10K+ items takes about an hour there, but on shared hosting we can't change max_execution_time. So I asked the other developer (the one sending the information) to send batches of 1K instead of 10K+ in one blow, let it rest for a second, then send another batch, and so on. So far I haven't got any answer. I was thinking of doing the "pause" on my end: after processing 1K items, wait for a second, then continue. But I don't see that as efficient as receiving the data in batches. How would you solve this?
Sorry, I don't have enough reputation to comment everywhere, yet, so I have to write this in an answer. I would recommend zedfoxus' method of batch processing above. In addition, I would highly recommend figuring out a way of processing those queries faster. Keep in mind that every single PHP function call, etc. gets multiplied by every row of data. Here are just a couple of the ways you might be able to get better performance:
Use prepared statements. This will allow MySQL to cache the memory operation for each successive query. This is really important.
If you use prepared statements, then you can drop the $c->real_escape_string() calls. I would also scratch my head to see what you can safely leave out of the $t->clean() method.
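For example, a prepared per-row lookup with mysqli might look like this; this is only a sketch, and it assumes $c wraps (or extends) a mysqli connection:

```php
// Prepare once, outside the loop; MySQL parses the statement a single time.
$stmt = $c->prepare("SELECT id FROM TABLE1 WHERE id = ?");

foreach ($nv as $k) {
    $id = (int)$k->id;           // the cast replaces real_escape_string() for integers
    $stmt->bind_param('i', $id); // bind and execute per row
    $stmt->execute();
    $res    = $stmt->get_result();
    $exists = $res->num_rows > 0;
    // ... UPDATE when $exists, otherwise queue the row for INSERT ...
}
```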
Next I would evaluate the performance of evaluating every single row individually. I'd have to benchmark it to be sure, but I think running a few PHP statements beforehand will be faster than making umpteen unnecessary MySQL SELECT and UPDATE calls. MySQL is much faster when inserting multiple rows at a time. If you expect multiple rows of your input to be changing the same row in the database, then you might want to consider the following:
a. Think about creating a temporary, precompiled array (depending on memory usage involved) that stores the unique rows of data. I would also consider doing the same for the secondary TABLE3. This would eliminate needless "update" queries, and make part b possible.
b. Consider a single query that selects every id from the database that's in the array. This will be the list of items to use an UPDATE query for. Update each of these rows, removing them from the temporary array as you go. Then, you can create a single, multi-row insert statement (prepared, of course), that does all of the inserts at a single time.
Take a look at optimizing your MySQL server parameters to better handle the load.
I don't know if this would speed up a prepared INSERT statement at all, but it might be worth a try. You can wrap the INSERT statement within a transaction as detailed in an answer here: MySQL multiple insert performance
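A transaction wrapper is just a line on each side of the loop; sketched here with mysqli and an assumed two-column table:

```php
$c->begin_transaction();
$stmt = $c->prepare("INSERT INTO TABLE1 (id, val2) VALUES (?, ?)");
foreach ($rows as $row) {
    $stmt->bind_param('ii', $row['id'], $row['val2']);
    $stmt->execute();
}
$c->commit(); // one flush to disk for the whole batch instead of one per INSERT
```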
I hope that helps, and if anyone else has some suggestions, just post them in the comments and I'll try to include them.
Here's a look at the original code with just a few suggestions for changes:
<?php
/* You can make sure that the connection type is persistent and
* I personally prefer using the PDO driver.
*/
include('db.php');
/* Definitely think twice about each tool that is included.
* Only include what you need to evaluate the submitted data.
*/
include('tools.php');
$c = new GetDB(); // Connection to DB
/* Take a look at optimizing the code in the Tools class.
* Avoid any and all kinds of loops–this code is going to be used in
* a loop and could easily turn into O(n^2) performance drain.
* Minimize the amount of string manipulation requests.
* Optimize regular expressions.
*/
$t = new Tools(); // Classes to clean, prevent XSS and others
if(isset($_POST['var'])){ // !empty() catches more cases than isset()
$nv = json_decode($_POST['var']);
/* LOOP LOGIC
* Definitely test my hypothesis yourself, but this is similar
* to what I would try first.
*/
//Row in database query
$inTableSQL = "SELECT id FROM TABLE1 WHERE id IN("; //keep adding to it
foreach ($nv as $k) {
/* I would personally use specific methods per data type.
* Here, I might use a type cast, plus valid int range check.
*/
$id = $t->cleanId($k->id); //I would include a type cast: (int)
// Similarly for other values
//etc.
// Then save validated data to the array(s)
$data[$id] = array($values...);
/* Now would also be a good time to add the id to the SELECT
* statement
*/
$inTableSQL .= "$id,";
}
$inTableSQL = rtrim($inTableSQL, ',') . ");"; // drop the trailing comma
// Execute query here
// Then step through the query ids returned, perform UPDATEs,
// remove the array element once UPDATE is done (use prepared statements)
foreach (.....
/* Then, insert the remaining rows all at once...
* You'll have to step through the remaining array elements to
* prepare the statement.
*/
foreach(.....
} //end initial POST data if
/* Everything below here becomes irrelevant */
foreach($nv as $k) {
$id = $t->clean($k->id);
// ... goes on for about 10 keys
// this might seems redundant or insufficient
$id = $c->real_escape_string($id);
// ... goes on for the rest of keys...
$q = $c->query("SELECT * FROM table WHERE id = '$id'");
$r = $q->fetch_row();
if ($r[1] > 0) {
// Item exist in DB then just UPDATE
$q1 = $c->query(UPDATE TABLE1);
$q4 = $c->query(UPDATE TABLE2);
if ($x == 1) {
$q2 = $c->query(SELECT);
$rq = $q2->fetch_row();
if ($rq[0] > 0) {
// Item already in table just update
$q3 = $c->query(UPDATE TABLE3);
} else {
// Item not in table then INSERT
$q3 = $c->query(INSERT TABLE3);
}
}
} else {
// Item not in DB then Insert
$q1 = $c->query(INSERT TABLE1);
$q4 = $c->query(INSERT TABLE2);
$q3 = $c->query(INSERT TABLE4);
if($x == 1) {
$q5 = $c->query(INSERT TABLE3);
}
}
}
}
The key is to minimize queries. Often, where you are looping over data doing one or more queries per iteration, you can replace it with a constant number of queries. In your case, you'll want to rewrite it into something like this:
include('db.php');
include('tools.php');
$c = new GetDB(); // Connection to DB
$t = new Tools(); // Classes to clean, prevent XSS and others
if(isset($_POST['var'])){
$nv = json_decode($_POST['var']);
$table1_data = array();
$table2_data = array();
$table3_data = array();
$table4_data = array();
foreach($nv as $k) {
$id = $t->clean($k->id);
// ... goes on for about 10 keys
// this might seems redundant or insufficient
$id = $c->real_escape_string($id);
// ... goes on for the rest of keys...
$table1_data[] = array( ... );
$table2_data[] = array( ... );
$table4_data[] = array( ... );
if ($x == 1) {
$table3_data[] = array( ... );
}
}
$values = array_to_sql($table1_data);
$c->query("INSERT INTO TABLE1 (...) VALUES $values ON DUPLICATE KEY UPDATE ...");
$values = array_to_sql($table2_data);
$c->query("INSERT INTO TABLE2 (...) VALUES $values ON DUPLICATE KEY UPDATE ...");
$values = array_to_sql($table3_data);
$c->query("INSERT INTO TABLE3 (...) VALUES $values ON DUPLICATE KEY UPDATE ...");
$values = array_to_sql($table4_data);
$c->query("INSERT IGNORE INTO TABLE4 (...) VALUES $values");
}
While your original code executed between 3 and 5 queries per row of your input data, the above code only executes 4 queries in total.
I leave the implementation of array_to_sql to the reader, but hopefully this should explain the idea. TABLE4 is an INSERT IGNORE here since you didn't have an UPDATE in the "found" clause of your original loop.
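For what it's worth, a minimal array_to_sql sketch could look like the following; note that it takes the connection as a second argument for escaping, so the calls above would need adjusting:

```php
// Turns array(array(1, 'a'), array(2, 'b')) into "(1,'a'),(2,'b')"
function array_to_sql($rows, $c) {
    $groups = array();
    foreach ($rows as $row) {
        $vals = array();
        foreach ($row as $v) {
            $vals[] = is_int($v) || is_float($v)
                ? $v
                : "'" . $c->real_escape_string($v) . "'";
        }
        $groups[] = '(' . implode(',', $vals) . ')';
    }
    return implode(',', $groups);
}
```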
I am trying to compare a stored date to the current date and send back data accordingly. I'm not fluent in PHP, so this is what I have so far:
while ($row = mysql_fetch_array($result)) {
// temp user array
$event_date_str = $row["date"];
$todays_date_str = date("m-j-Y");
$today = strtotime($todays_date_str);
$event_date = strtotime($event_date_str);
if($event_date > $today){
$event = array();
$event["pid"] = $row["pid"];
$event["name"] = $row["name"];
$event["longitude"] = $row["longitude"];
$event["latitude"] = $row["latitude"];
$event["pavement"] = $row["pavement"];
$event["traffic"] = $row["traffic"];
$event["environment"] = $row["environment"];
$event["image_b64"] = $row["image_b64"];
$event["created_at"] = $row["created_at"];
$event["date"] = $row["date"];
$event["time"] = $row["time"];
$event["type"] = $row["type"];
// push single product into final response array
array_push($response["events"], $event);
}else{
//delete it here
}
With that code right there I am getting nothing back. I have two items stored in the database, one with the date "2-28-2014" and another with "2-14-2014", so I should be getting one back and not the other, but I am getting neither. I know the saved dates have no leading zeros, so I should use j, right? Could someone help me figure out why this is not working? Sorry if it seems like a simple question.
Thanks in advance,
Tyler
This is not an efficient way to do things. If you need to pick items based on date, do it in the MySQL query directly. PHP filtering will always be slower than MySQL. Especially since you have to deliver extra data over network when filtering at PHP level.
So do it like this:
SELECT * FROM `table` WHERE `record_expires_datetime_gmt` > UTC_TIMESTAMP();
SELECT * FROM `table` WHERE `record_expires_date_gmt` > DATE(UTC_TIMESTAMP());
// use NOW() for local time, UTC_TIMESTAMP() is GMT/UTC
Then do what you need to do with the records. Never SELECT * and then filter records in PHP.
There's a whole set of DATETIME functions in MySQL to allow you MySQL server side filtering of data.
PS: Obviously, for this method to work, your MySQL table has to be properly designed. Date (and datetime) fields need to be of type DATE or DATETIME, not surrogate strings that are meaningful only within your project.
My code:
$query = "INSERT IGNORE INTO `user_app` (app_id,imei, package_name, sdk_name, sdk_version, app_version)
VALUES
('".$appid."','".$imei."', '".$pkg."', '".$sdkn."', '".$sdkv."', '".$appv."')";
$mysqli -> query($query);
$id = $mysqli -> insert_id ; //get last user insert id
$idt = date('G:i:s', time());
$new = strtotime($idt);
include('requestad.php');
When a new user registers, he'll get an ad from requestad.php in JSON format. The insert id is saved in a separate variable named $id; if the user hits via the application again (the application invokes every 30 minutes), he'll get a JSON ad again. I am trying to make the user get an ad only once in a whole 24 hours, which should be possible with the insert id and the insert timestamp. I am doing something like this:
if ( $new == time() ) {
include('requestad.php');
} elseif ( $new < time() ) {
echo 'nothing';
}
But the problem is that I didn't save the exact execution time in a variable, and saving the time is necessary for the comparison. Also, I have to send something blank to the user if he requests the resource again. Please have a look at this and help me find an optimal solution.
I haven't applied any logic yet. I could achieve this by storing the time (which is static) and comparing it to time(), which is the current time. I am still looking into this one.
$store_time = strtotime($row['time']); // the user's stored time, converted to a UNIX timestamp
// time() is already a UNIX timestamp, so don't wrap it in strtotime();
// strtotime("-24 hours") gives "now minus 24 hours" directly.
$cutoff = strtotime("-24 hours");
if ($store_time <= $cutoff)
{
    // 24 hours or more have passed: your code to serve the ad
}
else
{
    // too soon: your code to send the blank response
}
What this code does is take links from the db and compare them to a keyword; if it matches, KeywordCounter++, and LinkCounter++ every time.
I want to print LinkCounter after every link it goes through, but with the code I wrote it only shows after the loop ends (after all the links are crossed). How can I see LinkCounter every time a link is checked?
How will I be able to watch the counter climb live?
<?php
include('Connect.php'); // holds the db connection
$KeyWord = 'googoo';
$LinkCounter = 0;
$KeywordCounter = 0;
$query = mysql_query("SELECT * FROM doalgo WHERE Pass != '0'") or die(mysql_error());
while ($info = mysql_fetch_array($query)) {
    $id = $info['id'];
    $link = $info['link'];
    $num_rows = mysql_num_rows($query);
    mysql_query("UPDATE doalgo SET Pass = '1' WHERE id = '$id'");
    $CurrentFile = file_get_contents($link);
    if (!strpos($CurrentFile, $KeyWord)) {
        // nothing
    } else {
        mysql_query("UPDATE doalgo SET Posted = '1' WHERE id = '$id'");
        $KeywordCounter++;
    }
    $LinkCounter++;
    if ($id == $num_rows) {
        die();
    }
}
echo "<br />KeywordCounter: " . $KeywordCounter;
echo "<br />LinkCounter: " . $LinkCounter;
?>
It's better to calculate the average speed of updates (for example, the number of updates per hour) and send just a single integer to the browser every hour.
Using jQuery, you can update the value shown to the user at that rate.
If I understand your question correctly, you want the web page to display immediately, then constantly update the LinkCounter display as the SQL queries progress?
If this is a correct understanding, to do this requires AJAX. Your server has to send constant updates to the web browser every time $LinkCounter is updated, then the JavaScript running in the browser will update the display with that information. Obviously, it's a much more complicated thing to do than what your script currently does. It's an entirely different design pattern.
If this is truly something you want to learn to do, there are many books on the subject of AJAX, or google can help you, too.
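Short of full AJAX, one low-tech way to watch the counter move is to flush output to the browser inside the loop. Whether this works depends on your server's output-buffering configuration, so treat it as a sketch:

```php
while ($info = mysql_fetch_array($query)) {
    // ... the existing per-link work ...
    $LinkCounter++;
    echo "Links checked: $LinkCounter<br />\n";
    ob_flush(); // push PHP's output buffer...
    flush();    // ...and the web server's, so the browser sees it immediately
}
```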
I am trying to implement a check in my PHP code that checks whether there is a duplicate uid in the database and, if so, assigns a new uid and checks again, but I am having trouble nailing the logic. Here is what I have thus far:
function check($uid){
$sql = mysql_query("SELECT * FROM table WHERE uid='$uid'");
$pre = mysql_num_rows($sql);
if($pre >= 1){
return false;
}else{
return true;
}
}
And then, using that function, I thought of using a while loop to continue looping until it evaluates to true:
$pre_check = check($uid);
while($pre_check == false){
//having trouble figuring out what should go here
}
So basically, once I have a usable uid, write everything to the database; otherwise keep generating new ones and checking them until one is found that is not already in use.
It is probably really simple, but for some reason I am having trouble with it.
Thanks in advance!
$uid = 100; // pick some other value you want to start with or have stored based on the last successful insert.
while($pre_check == false){
$pre_check = check(++$uid);
}
Of course this is exactly what 'auto incrementing' primary keys are useful for. Are you aware of 'auto incrementing' primary keys in MySQL?
EDIT
In light of your comment regarding maintaining someone else's code that uses the random function like that (ewwwww)... I would use the method I suggest above and store the last inserted id somewhere you can read it again for the next user. This will allow you to "fill-in-the-blanks" for the uids that are missing. So, if for example you have uids 1, 2, 5, 9, 40, 100... you can start with $uid = 1; Your while loop will return once you get to 3. Now you store the 3 and create the new record. Next time, you start with $uid = 3; and so on. Eventually you will have all numbers filled in.
It is also important to realize that you will need to do the inserts while locking the tables for WRITE. You don't want to get into a race condition where two different users are given the same uid because they are both searching for an available uid at the same time.
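Sketched with the same mysql_* API used elsewhere in this thread (the table and column names are illustrative):

```php
mysql_query("LOCK TABLES `table` WRITE"); // nobody else can grab a uid now

// Find the first free uid; check() returns true when the uid is unused.
$uid = 1;
while (!check($uid)) {
    $uid++;
}
mysql_query("INSERT INTO `table` (uid) VALUES ('" . (int)$uid . "')");

mysql_query("UNLOCK TABLES");
```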
Indeed the best is to use auto-increment ids, but if you don't have the choice, you can use a recursive function like this:
function find_uid() {
    $new_uid = rand(1000000000, 9999999999);
    $sql = mysql_query("SELECT COUNT(*) AS 'nb' FROM `table` WHERE uid = " . $new_uid . ";");
    $row = mysql_fetch_assoc($sql);
    $pre = $row['nb'];
    return ($pre >= 1 ? find_uid() : $new_uid);
}
COUNT(*) should perform better, because the count is done by MySQL and not PHP.
By the way, if you need a new uid, shouldn't the condition be ($pre > 0) instead of ($pre >= 1)?