I have an app that communicates with my API, which runs PHP and MySQL.
What I wanted to do was record changes that occur to entities in my table for each user. If a user makes a change to their data, I can see the change that occurred. This way if they ever have questions or accidentally delete something, I can go back and tell them what the entities looked like at various stages in the year.
I don't need to be crazy specific about the differences; all I would like to do is record inserts and updates (as they're represented in a JSON body).
Basically what I did for now was any time a POST/PUT occurs to my API for certain routes, I just take the JSON in the request body, and I save it to a record in the database as a transaction that took place for that user.
This was great early on, but now, hundreds of thousands of records later, the JSON bodies are large and take up a lot of room. My database table is 13GB, and queries take a while to run. I truncated it, but within 4 months it grew to another 10GB. This problem will likely only get worse.
Is there an approach someone can recommend to record this? Can I maybe send the request body over to something on AWS or some other storage offline or another database somewhere else? Flat files perhaps or a non-relational database? It's not like I actually need the data in real time but if I ever wanted to get a history of someone I'd like to know I could.
I do take nightly backups of the DB, so an alternate approach was I was thinking of cutting out the transaction logs entirely, and instead just letting it continue to back up nightly. Sure, I won't be able to show a history of what dates entities were updated/added, but at least I could always reference a few backups to see what records were for a given user on a certain date after I do a restore.
Any ideas or suggestions? Thanks!
Instead of logging the entire JSON, you can log just the values that have changed. You also don't have to log your insert data: your database will always have the current record, so logging inserts is redundant.
You can implement a diff function to compare your existing JSON against the changed JSON.
To illustrate, see the code below, which borrows a JavaScript diff function from this Answer.
// get the current value from your database
var oldvalues = {
    "id": 50,
    "name": "Old Name",
    "description": "Description",
    "tasks": [{
        'foo': 'bar'
    }]
};
// the incoming request body
var newvalues = {
    "id": 50,
    "name": "New name",
    "description": "Description",
    "tasks": [{
        'foo': 'bar'
    }]
};

var isEmptyObject = function(obj) {
    var name;
    for (name in obj) {
        return false;
    }
    return true;
};

// returns only the keys of obj1 whose values differ in obj2;
// note: keys that exist only in obj2 are not reported
var diff = function(obj1, obj2) {
    var result = {};
    var change;
    for (var key in obj1) {
        if (typeof obj2[key] == 'object' && typeof obj1[key] == 'object') {
            change = diff(obj1[key], obj2[key]);
            if (isEmptyObject(change) === false) {
                result[key] = change;
            }
        } else if (obj2[key] != obj1[key]) {
            result[key] = obj2[key];
        }
    }
    return result;
};

var update = diff(oldvalues, newvalues);
// save this to your database
$('#diff').text(JSON.stringify(update));

textarea {
    width: 400px;
    height: 50px;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<textarea id="diff"></textarea>
As you can see, the only change that would be saved is {"name":"New name"}, which will cut down on your data usage.
You would of course need to either port this to PHP or look at some existing packages, such as node-rus-diff, that might serve your needs.
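For reference, a PHP port might look something like the sketch below. It is untested, assumes PHP 5.2+ for json_decode()/json_encode(), and mirrors the JavaScript above, including its quirk of ignoring keys that exist only in the new JSON:

// Sketch of a PHP port of the diff above: returns only the values in
// $new that differ from $old (keys present only in $new are ignored,
// matching the JavaScript version's behavior).
function json_diff($old, $new) {
    $result = array();
    foreach ($old as $key => $value) {
        if (is_array($value) && isset($new[$key]) && is_array($new[$key])) {
            $change = json_diff($value, $new[$key]);
            if (!empty($change)) {
                $result[$key] = $change;
            }
        } elseif (!isset($new[$key]) || $new[$key] !== $value) {
            $result[$key] = isset($new[$key]) ? $new[$key] : null;
        }
    }
    return $result;
}

// usage: decode both bodies as associative arrays, store the diff
$update = json_diff(json_decode($oldJson, true), json_decode($newJson, true));
// save json_encode($update) to your transaction table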
As long as you are keeping a timestamp or a sequence number, you can chain multiple transactions together to roll back to any prior state. This is analogous to doing an incremental backup.
You could also run a maintenance task at set intervals if you would like to create checkpoints and compare a current state to a previous state. Perhaps once a month, take a backup and record the differences between objects that have changed. This would be analogous to a differential backup.
Finally, you can take a full backup and clear out the previous transactions, analogous to a full backup.
It is common practice for administrators to perform a combination of incremental, differential, and full backups to balance storage costs and recovery needs. Using the approaches outlined above, you can implement the strategy that is right for you.
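To make the rollback idea concrete, here is a minimal, untested sketch (the helper name and storage details are mine, not part of any library) of replaying logged diffs, oldest first, on top of a checkpoint snapshot:

// Hypothetical: rebuild an entity's state at a point in time by replaying
// its logged diffs (oldest first) on top of a full snapshot/checkpoint.
function replay_diffs($snapshot, $diffs) {
    foreach ($diffs as $diff) {
        foreach ($diff as $key => $value) {
            if (is_array($value) && isset($snapshot[$key]) && is_array($snapshot[$key])) {
                $snapshot[$key] = replay_diffs($snapshot[$key], array($value));
            } else {
                $snapshot[$key] = $value;
            }
        }
    }
    return $snapshot;
}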
In trying to rejuvenate code I wrote mostly 14+ years ago, I've come to see that the lovely little setup I wrote then was... lacking in certain places, namely handling user inputs.
Lesson: never underestimate users' ability to inject trash, typos, and dupes past your validators.
The old way is reaching critical mass as there are 470 items in a SELECT dropdown now. I want to reinvent this part of the process so I don't have to worry about it hitting a breaking point.
So the idea is to build a fuzzy search method so that after the typist enters the search string, we check against five pieces of data, all of which reside in the same row.
I need to check the name submitted against the Stage Name, two also-known-as names, and the legal name, with a final check against a soundex() index based on the Stage Name (this catches a few spelling errors missed otherwise).
I've tried a complicated block of code to check these things (and it doesn't work, mostly because I think I coded the comparisons too strictly) as part of a do/while loop.
In the below, the variable $Rin contains the user-supplied name.
$setr = mysql_query("SELECT ID,StageName,AKA1,AKA2,LegalName,SoundEx FROM performers");
if ($R = mysql_fetch_array($setr)) {
    do {
        $RT  = substr(trim($Rin), 5);
        $RT1 = substr($R[1], 5);
        $RT2 = substr($R[2], 5);
        $RT3 = substr($R[3], 5);
        $RT4 = substr($R[4], 5);
        $RTx = soundex($RT);
        if ($RT == $RT1) {
            $RHits[] = $R[0];
        }
        if ($RT == $RT2) {
            $RHits[] = $R[0];
        }
        if ($RT == $RT3) {
            $RHits[] = $R[0];
        }
        if ($RT == $RT4) {
            $RHits[] = $R[0];
        }
        if ($RTx == $R[5]) {
            $RHits[] = $R[0];
        }
    } while ($R = mysql_fetch_array($setr));
}
The idea being that I'll build an array of the ID#'s of the near hits, which I'll populate into a select dropdown that hopefully has fewer hits than the whole table. That means querying for a result set from the contents of that array, in order to display the performer's name in the SELECT dropdown and pass the ID# as the value for those choices.
That's when I hit the 'I need to use an array in my WHERE clause' problem, and after finding that answer, I am starting to suspect I'm out of luck due to Stipulation #2 below. So I started looking at alternate search methods, and I'm not sure I've gotten anywhere but more confused.
So, is there a better way to scan a single table for six fields, checking five against user input and noting the sixth for display in a subset of the original table?
Thought process:
Against the whole table, per record, test $Rin against these tests in this order:
$Rin -> StageName
$Rin -> AKA1
$Rin -> AKA2
$Rin -> LegalName
soundex($Rin) -> SoundEx
where a hit on any of the five operations adds the ID# to a result array that is used to narrow the results from 470 performers down to a reasonable list to choose from.
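(To sketch what I'm imagining, maybe all five tests could be pushed into one query. This assumes mysql_real_escape_string() exists on this PHP 4.4 install, and that the stored SoundEx column matches MySQL's SOUNDEX() output, which I haven't verified:)

$RHits = array();
$name = mysql_real_escape_string(trim($Rin));
$setr = mysql_query(
    "SELECT ID FROM performers
     WHERE StageName = '$name'
        OR AKA1 = '$name'
        OR AKA2 = '$name'
        OR LegalName = '$name'
        OR SoundEx = SOUNDEX('$name')"
);
while ($R = mysql_fetch_array($setr)) {
    $RHits[] = $R[0]; // ID of a near hit
}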
Stipulations:
1) As written, I know this is vulnerable to an SQL injection attack.
2) Server runs PHP 4.4.9 and MySQL 4.0.27-Standard, I can't upgrade it. I've got to prove it works before money will be spent.
3) This is hobby-level stuff, not my day job.
4) Performers often use non-English names or elements in their names, and this has led to typos and duplication by the data entry typists.
I've found a lot of mysqli and PDO answers for this sort of thing, and I'm seeing a lot of things that only half make sense (like link #4 below). I'm working on getting up to speed on these things as I try to fix what's become broken.
Places already looked:
PHP mysql using an array in WHERE clause
PHP/MySQL small-scale fuzzy search
MySQL SubString Fuzzy Search
Sophisticated Name Lookup
I mentioned in the comments that a Javascript typeahead library might be a good choice for you. I've found Twitter's Typeahead library and Bloodhound engine to be pretty robust. Unfortunately, the documentation is a mixed bag: so long as what you need is very similar to their examples, you're golden, but certain details (explanations of the tokenizers, for example) are missing.
In one of the several questions re Typeahead here on Stack Overflow, @JensAKoch says:
To be honest, I think twitter gave up on typeahead.js. We look at 13000 stars, a full bugtracker with no maintainer and a broken software, last release 2015. I think that speaks for itself, or not? ... So, try one of the forks: github.com/corejavascript/typeahead.js
Frankly, in a brief check, the documentation at the fork looks a bit better, if nothing else. You may wish to check it out.
Server-side code:
All of the caveats of using an old version of PHP apply. I highly recommend retooling to use PDO with PHP 5, but this example uses PHP 4 as requested.
Completely untested PHP code. json_encode() would be better, but it doesn't appear until PHP 5. Your endpoint would be something like:
header("Content-Type: application/json");

$results = mysql_query(
    "SELECT ID,StageName,AKA1,AKA2,LegalName,SoundEx FROM performers"
);
$fields = array("ID","StageName","AKA1","AKA2","LegalName","SoundEx");

echo "[";
$firstRow = true;
while ($row = mysql_fetch_array($results)) {
    echo $firstRow ? "" : ",";
    $firstRow = false;
    echo "\n\t{";
    $firstField = true;
    foreach ($fields as $f) {
        echo $firstField ? "" : ",";
        $firstField = false;
        // addslashes() is a crude stand-in for real JSON escaping on PHP 4
        echo "\n\t\t\"{$f}\": \"" . addslashes($row[$f]) . "\"";
    }
    echo "\n\t}";
}
echo "\n]";
Client-side code:
This example uses a static JSON file as a stub for all of the results. If you anticipate your result set going over 1,000 entries, you should look into the remote option of Bloodhound. This would require you to write some custom PHP code to handle the query, but it would look largely similar to the end point that dumps all (or at least your most common) data.
var actors = new Bloodhound({
    // Each row is an object, not a single string, so we have to modify the
    // default datum tokenizer. Pass in the list of object fields to be
    // searchable.
    datumTokenizer: Bloodhound.tokenizers.obj.nonword(
        'StageName', 'AKA1', 'AKA2', 'LegalName', 'SoundEx'
    ),
    queryTokenizer: Bloodhound.tokenizers.whitespace,
    // URL points to a json file that contains an array of actor JSON objects
    // Visit the link to see details
    prefetch: 'https://gist.githubusercontent.com/tag/81e4450de8eca805f436b72e6d7d1274/raw/792b3376f63f89d86e10e78d387109f0ad7903fd/dummy_actors.json'
});

// passing in `null` for the `options` arguments will result in the default
// options being used
$('#prefetch .typeahead').typeahead(
    {
        highlight: true
    },
    {
        name: 'actors',
        source: actors,
        templates: {
            empty: "<div class=\"empty-message\">No matches found.</div>",
            // This is simply a function that accepts an object.
            // You may wish to consider Handlebars instead.
            suggestion: function(obj) {
                return '<div class="actorItem">'
                    + '<span class="itemStageName">' + obj.StageName + '</span>'
                    + ', <em>legally</em> <span class="itemLegalName">' + obj.LegalName + '</span>'
                    + '</div>';
            }
            //suggestion: Handlebars.compile('<div><strong>{{value}}</strong> – {{year}}</div>')
        },
        display: "LegalName" // name of object key to display when selected
        // Instead of display, you can use the 'displayKey' option too:
        // displayKey: function(actor) {
        //     return actor.LegalName;
        // }
    });
/* These class names can be specified in the Typeahead options hash. I use the defaults here. */
.tt-suggestion {
    border: 1px dotted gray;
    padding: 4px;
    min-width: 100px;
}
.tt-cursor {
    background-color: rgb(255,253,189);
}

/* These classes are used in the suggestion template */
.itemStageName {
    font-size: 110%;
}
.itemLegalName {
    font-size: 110%;
    color: rgb(51,42,206);
}
<script src="https://code.jquery.com/jquery-3.1.1.min.js"></script>
<script src="https://twitter.github.io/typeahead.js/releases/latest/typeahead.bundle.js"></script>
<p>Type something here. A good search term might be 'C'.</p>
<div id="prefetch">
<input class="typeahead" type="text" placeholder="Name">
</div>
For ease, here is the Gist of the client-side code.
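If you do outgrow prefetch, the PHP side of the remote option could be a filtered variant of the dump endpoint above. A rough, untested sketch, assuming Typeahead is configured to send the query as ?q=...:

$q = mysql_real_escape_string($_GET['q']);
$results = mysql_query(
    "SELECT ID,StageName,AKA1,AKA2,LegalName,SoundEx FROM performers
     WHERE StageName LIKE '%{$q}%'
        OR AKA1 LIKE '%{$q}%'
        OR AKA2 LIKE '%{$q}%'
        OR LegalName LIKE '%{$q}%'"
);
// ...then emit the same hand-rolled JSON array as in the endpoint above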
I'm using the Netsuite PHP Toolkit to try to obtain a list of invoices for a customer. I can do the call (using a TransactionSearch) with no problem, but I'm struggling to understand how I'm supposed to get all details for an invoice - i.e. the invoice "header" details (e.g. grand total, currency, main menu line etc) as well as details for each line item (net value, taxable value, item etc).
I have tried a couple of approaches:
TransactionSearchAdvanced, with return columns specified and the returnSearchColumns preference set to "false". This gives back all the separate lines (woo!), but things like currency and term aren't expanded out: you just get the internalId and not the actual text (or the symbol). Also, with TSA, do you really have to specify every column you want? i.e. is the default really just an empty set of fields? Isn't there a way of just saying "give me all the details for all lines of each invoice"?
TransactionSearch, with returnSearchColumns preference set to "true". This gives a list of single Invoice type records, with all the currency and term stuff correctly populated, but frustratingly, none of the individual line items. It's more of a summary.
So I am left with a couple of options, neither of which are very palatable, namely:
Do both calls for all invoices and combine the data. These searches take a long time (performance is another bugbear for me), so I really don't want to do this.
or
Figure out a way of requesting the data for terms, currency etc and also a way of obtaining invoice lines.
I have no idea how you're supposed to do this, and can't find anything on the internet about it. This is one of the worst interfaces I've used (and I've used some pretty bad ones).
Any help would be hugely appreciated.
Just like you I started out trying to do things with the Web Services API (aka SuiteTalk). Mostly it was an exercise in frustration because eventually what I found out was that I plain couldn't do what I wanted with them. That and the performance was pretty bad, which would have killed my project even if it had worked properly.
Like Faz, I've found it much easier and faster to use a combination of RESTlets and Saved Searches than deal with the web services framework.
Basically break your problem down into these parts:
Saved Search that returns the results that you want (keep track of the internal ID you'll need it later)
RESTlet: just a JavaScript file that defines the function you will use to return the results from the search
Client code to call the RESTlet and get the results.
Part I:
So the saved search is pretty straightforward. I'm going to assume you can make that happen and also that you can actually get all the fields you want in one place. That hasn't always been the case in my experience.
Part II:
The RESTlet involves a lot more steps even though it's really a very simple thing. What makes it complicated is getting it uploaded and deployed on your NetSuite site. If you don't already have the NetSuite IDE installed I highly recommend it if only to make deploying the scripts a little easier. The autocompletion and tooltips are extremely useful as well.
For instance, here is code I use to get results from a search I care about. This was adapted from some kind soul's posting somewhere on the internet, but I forget where:
function getSearchResults() {
    var max_rows = 1000;
    var search_id = 1211;
    var search = nlapiLoadSearch(null, search_id);
    var results = search.runSearch();
    var rows = [];

    // add starting point for usage
    var context = nlapiGetContext();
    var startingUsage = context.getRemainingUsage();
    rows.push(["beginning usage", startingUsage]);

    // now create the collection of result rows in 1000 row chunks
    var index = 0;
    do {
        var chunk = results.getResults(index, index + max_rows);
        if (!chunk) break;
        chunk.forEach(function(row) {
            rows.push(row);
            index++;
        });
    } while (chunk.length === max_rows);

    // add a line that returns the remaining usage for this RESTlet
    context = nlapiGetContext();
    var remainingUsage = context.getRemainingUsage();
    rows.push(["remaining usage", remainingUsage]);

    // send back the rows
    return rows;
}
This is where you get things primed by passing in your Saved Search Internal ID:
var search = nlapiLoadSearch(null, SEARCH_ID);
var resultSet = search.runSearch();
Then the code repeatedly calls getResults() to get chunks of 1000 results, this is a NetSuite limitation. Once you have this written you have to upload the script to NetSuite and configure and deploy it. The most important part is telling it what function to assign to each verb. In this case I assigned GET to execute the getSearchResults. There is a lot of work to do here, and I'm not going to type all of it out because it is worth your time to learn this part. At least enough to get the IDE to do it for you =D. You can read all about it in the "Introduction to RESTlets" guide.
Part III:
Client code can be in whatever you want that does REST the way you like to. Personally I like Python for this because the requests library is fantastic.
Here's some example Python code:
import requests
import json
url = 'https://rest.sandbox.netsuite.com/app/site/hosting/restlet.nl?script=123&deploy=1'
headers = {'Content-Type': 'application/json', 'Authorization': 'NLAuth nlauth_account=1234567, nlauth_email=someone@somewhere.com, nlauth_signature=somepassword, nlauth_role=3'}
resp = requests.get(url, headers=headers)
data = resp.json()
The URL is going to be displayed to you as part of the deployment of the RESTlet. Then it's up to you to do what you want with the data that comes back.
So the things I would suggest you spend time with would be
Setting up the NetSuite IDE
Getting and reading the SuiteScript developer reference docs
Finding a good way to create REST client code in your language of choice.
I hope that helps.
I created a saved search in NetSuite and call that search using a RESTlet. With this it is pretty lightweight, and you get the data just as it is in the saved search.
Performance-wise, a RESTlet is much better than web services.
Create a new Suitelet script and deploy it.
The script below will give you an invoice list by customer internal ID:
function customSearch(request, response) {
    var rows = [];
    var result;
    var filters = [];
    // 9989 is the customer internal id; you can add more
    // by pushing additional ids to the array
    filters.push(new nlobjSearchFilter('entity', null, 'anyOf', [9989]));
    var invoiceList = nlapiSearchRecord('invoice', null, filters, []);
    // by default the record limit is 1000;
    // taking 100 records here
    for (var i = 0; i < Math.min(100, invoiceList.length); i++) {
        if (parseInt(invoiceList[i].getId()) > 0) {
            var recordid = invoiceList[i].getId();
            try {
                result = nlapiLoadRecord(invoiceList[i].getRecordType(), recordid);
                // pushing in to result
                rows.push(result);
            } catch (e) {
                if (e instanceof nlobjError) {
                    nlapiLogExecution('DEBUG', 'system error', e.getCode() + '\n' + e.getDetails());
                } else {
                    nlapiLogExecution('DEBUG', 'unexpected error', e.toString());
                }
            }
        }
    }
    response.setContentType('JSON');
    response.write(JSON.stringify({'records': rows}));
    return;
}
Here is what I have for getting a customer's invoices:
public function getCustomerInvoices($customer_id)
{
    $service = new NetSuiteService($this->config);

    $customerSearchBasic = new CustomerSearchBasic();
    $searchValue = new RecordRef();
    $searchValue->type = 'customer';
    $searchValue->internalId = $customer_id;
    $searchMultiSelectField = new SearchMultiSelectField();
    setFields($searchMultiSelectField, array('operator' => 'anyOf', 'searchValue' => $searchValue));
    $customerSearchBasic->internalId = $searchMultiSelectField;

    $transactionSearchBasic = new TransactionSearchBasic();
    $searchMultiSelectEnumField = new SearchEnumMultiSelectField();
    setFields($searchMultiSelectEnumField, array('operator' => 'anyOf', 'searchValue' => "_invoice"));
    $transactionSearchBasic->type = $searchMultiSelectEnumField;

    $transactionSearch = new TransactionSearch();
    $transactionSearch->basic = $transactionSearchBasic;
    $transactionSearch->customerJoin = $customerSearchBasic;

    $request = new SearchRequest();
    $request->searchRecord = $transactionSearch;

    $searchResponse = $service->search($request);
    return $searchResponse->searchResult->recordList;
}
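Calling it would look something like the sketch below. Note that recordList is the toolkit's RecordList object, whose record property holds the array; the $api variable and the invoice field names (tranId, total) are assumptions on my part, not confirmed by the question:

// hypothetical usage: $api is whatever object holds the method above
$invoices = $api->getCustomerInvoices(1234);
foreach ($invoices->record as $invoice) {
    // each entry should be an Invoice object from the toolkit
    echo $invoice->tranId . ': ' . $invoice->total . "\n";
}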
I have implemented a basic auto-complete feature using jQuery autocomplete. I am querying the DB every time, which is making the auto-complete quite slow. I am looking for ways to make it faster, much like Quora.
Here is the code from front-end:
<script type="text/javascript">
var URL2 = '<?php e(SITE_URL); ?>fronts/searchKeywords';
jQuery(document).ready(function(){
    var CityKeyword = jQuery('#CityKeyword');
    CityKeyword.autocomplete({
        minLength : 1,
        source : URL2
    });
});
</script>
Here is the code from server side:
function searchKeywords(){
    if ($this->RequestHandler->isAjax()) {
        $this->loadModel('Expertise_area');
        Configure::write('debug', 0);
        $this->autoRender = false;
        $expertise = $this->Expertise_area->find('all', array(
            'conditions' => array('Expertise_area.autocomplete_text LIKE' => '%' . $_GET['term'] . '%'),
            'fields' => array('DISTINCT (Expertise_area.autocomplete_text) AS autocomplete_text'),
            'limit' => 5
        ));
        $response = array();
        $i = 0;
        if (!empty($expertise)) {
            $len = strlen($_GET['term']);
            foreach ($expertise as $valueproductname) {
                $pos = stripos($valueproductname['Expertise_area']['autocomplete_text'], $_GET['term']);
                $keyvalue = "";
                if ($pos === 0) {
                    $keyvalue = "<strong>" . substr($valueproductname['Expertise_area']['autocomplete_text'], $pos, $len) . "</strong>"
                        . substr($valueproductname['Expertise_area']['autocomplete_text'], $len);
                } else {
                    $keyvalue = substr($valueproductname['Expertise_area']['autocomplete_text'], 0, $pos) . "<strong>"
                        . substr($valueproductname['Expertise_area']['autocomplete_text'], $pos, $len) . "</strong>"
                        . substr($valueproductname['Expertise_area']['autocomplete_text'], $pos + $len);
                }
                $response[$i]['value'] = $valueproductname['Expertise_area']['autocomplete_text'];
                $response[$i]['label'] = "<span class=\"username\">" . $keyvalue . "</span>";
                $i++;
            }
        }
        // always return valid JSON, even when there are no matches
        echo json_encode($response);
    }
}
I have researched a bit and so far following solutions are worth looking at:
Query data on page load and store it in a COOKIE to be used in the future.
Implement some caching mechanism (memcache?). But my website is on CakePHP, which does internal caching if I am right, so will it be worth going in this direction?
Use some third-party indexing mechanism like Solr, Lucene, etc. I don't know much about these.
Implement a more complex "Prefix Search" myself.
What is the right way to go about it? Please help me out here.
I've never tried this but will be doing it soon for a project I'm working on.
I always considered the possibility of, during the initial page load, receiving via AJAX (or perhaps just including in the page) the top 10 words for each letter of the alphabet, e.g.
A - apples, anoraks, alaska, angela, aha, air, arrgh, any, alpha, america
B - butter, bob etc.....
This way, when the user presses A-Z, you can instantly provide them with 10 of the most popular keywords without any further requests, as you already have them stored in an array in the JS.
I'm not sure of size/memory usage but this could be extended further to handle the first 2 letters, e.g. AA, AB, AC.....BA, BB, BC.... ZA, ZB, ZZ... of course many combinations such as words starting with ZZ won't have any data unless it's a music site and it's ZZ Top! This means it probably won't take up so much memory or bandwidth to send this data during initial page load. Only when the user types the 3rd letter do you need to do any further data lookups/transfers.
You auto-update this data every day, week or whatever depending on site usage and the most popular searches.
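A hedged sketch of that precompute job in PHP follows. The table and column names are made up here, popularity is assumed to come from some search-log counter, and json_encode()/file_put_contents() assume PHP 5:

// cron job: top 10 keywords per starting letter, dumped as a JS file
$top = array();
foreach (range('a', 'z') as $letter) {
    $rsd = mysql_query(
        "SELECT keyword FROM search_keywords
         WHERE keyword LIKE '" . $letter . "%'
         ORDER BY hit_count DESC LIMIT 10"
    );
    while ($row = mysql_fetch_assoc($rsd)) {
        $top[$letter][] = $row['keyword'];
    }
}
file_put_contents('top_keywords.js',
    'var topKeywords = ' . json_encode($top) . ';');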
I am adding a solution to my question which I figured out after a lot of research.
Problem was:
I was using Ajax to fetch keywords from the database every time a user changed the text in the search box.
I was doing a wildcard search to match the search term anywhere within the string, not just at the start of keywords; for example, "dev" would return "social development", "development", etc.
Solution:
I have a fixed array of keywords (200) which is not going to increase exponentially in the near future. So, instead of doing complex indexing, I am currently sending all keywords in an array.
I am sending this data in an array on page load since it is small. If it becomes large, I will fetch it in the background via some Ajax in different indexed arrays.
I am using jQuery's Autocomplete widget to do the rest for me.
For highlighting the search term, I am using a hack that works around _renderItem. (Copied from Stack Overflow. Thanks for that!!)
Code:
function monkeyPatchAutocomplete() { // hack to highlight the search term
    jQuery.ui.autocomplete.prototype._renderItem = function(ul, item) {
        var re = new RegExp("(?![^&;]+;)(?!<[^<>]*)(" + this.term + ")(?![^<>]*>)(?![^&;]+;)", "gi");
        var t = item.label.replace(re, "<span style='font-weight:bold;color:#434343;'>" +
            "$&" +
            "</span>");
        return jQuery("<li></li>")
            .data("item.autocomplete", item)
            .append("<a>" + t + "</a>")
            .appendTo(ul);
    };
}

function getKeywords(){
    // Function that returns the list of keywords. I am using an array since my data is small.
    // This function can be modified to fetch data in whatever way one wants.
    // I intend to use indexed arrays in future if my data becomes large.
    var allKeywords = <?php echo json_encode($allKeywords); ?>;
    return allKeywords;
}

jQuery(document).ready(function(){
    monkeyPatchAutocomplete();
    var CityKeyword = jQuery('#CityKeyword');
    CityKeyword.autocomplete({
        minLength : 1,
        source : getKeywords()
    });
});
I'm looking into doing some long polling with jQuery and PHP for a message system. I'm curious to know the best/most efficient way to achieve this. I'm basing it off this Simple Long Polling Example.
If a user is sitting on the inbox page, I want to pull in any new messages. One idea that I've seen is adding a last_checked column to the message table. The PHP script would look something like this:
query to check for all null `last_checked` messages
if there are any...
    while (...) {
        add data to array
        update `last_checked` column to current time
    }
send data back
I like this idea but I'm wondering what others think of it. Is this an ideal way to approach this? Any information will be helpful!
To add, there is no set number of users that could be on the site, so I'm looking for an efficient way to do it.
Yes, the way that you describe it is generally how the long polling method works.
Your sample code is a little vague, so I would like to add that you should sleep() for a small amount of time inside the while loop, and each time compare the last_checked time (which is stored on the server side) with the current time (which is what is sent from the client's side).
Something like this:
$current = isset($_GET['timestamp']) ? $_GET['timestamp'] : 0;
$last_checked = getLastCheckedTime(); // returns the last time the db was accessed

while ($last_checked <= $current) {
    usleep(100000);
    $last_checked = getLastCheckedTime();
}

$response = array();
$response['latestData'] = getLatestData(); // fetches all the data you want based on time
$response['timestamp'] = $last_checked;
echo json_encode($response);
And on the client side, your JS would have this:
function longPolling(){
    $.ajax({
        type : 'GET',
        url : 'data.php?timestamp=' + timestamp,
        async : true,
        cache : false,
        success : function(data) {
            var jsonData = JSON.parse(data); // safer than eval()
            // do something with the data, e.g. display it
            timestamp = jsonData['timestamp'];
            setTimeout(longPolling, 1000);
        },
        error : function(XMLHttpRequest, textstatus, error) {
            alert(error);
            setTimeout(longPolling, 15000);
        }
    });
}
Instead of adding the new column as last_checked, you can add it as last_checked_time. That way you can get the data from last_checked_time to the current time.
(i.e) DATA BETWEEN `last_checked_time` AND `current_time`
If you only have one user, that's fine. If you don't, you'll run into complications. You'll also run one hell of a lot of SELECT queries by doing this.
I've been firmly convinced for a while that PHP and long polling just do not work natively due to PHP not having any cross-client event-driven possibilities. This means you'll need to check your database every second/2s/5s instead of relying on events.
If you still want to do this, however, I would make your messaging system write a file [nameofuser].txt in a directory whenever the user has a message, and check for message existence using this trigger. If the file exists and is not empty, fire off the request to get the message, process, feed back and then delete the text file. This will reduce your SQL overhead, while (if you're not careful) increasing your disk IO.
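A minimal sketch of that check, assuming a per-user flag file path of my own invention (a real version should also give up after some timeout):

// wait until the per-user flag file signals "you have mail"
$flag = '/var/run/myapp/messages/' . (int)$userId . '.txt';
while (!file_exists($flag) || filesize($flag) == 0) {
    clearstatcache(); // file_exists()/filesize() results are cached per request
    usleep(250000);   // wait 0.25s between checks
}
// the flag is set: now query MySQL for the actual messages, respond,
// then delete the flag file
unlink($flag);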
Structure-wise, an associative table is by far the best. Make a new table dedicated to tracking read status, with three columns: user_id, message_id, read_at. The usage should be obvious: any combination not in there is unread.
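Fetching unread messages then reduces to an anti-join; the table and column names below are illustrative, not prescribed:

// anything without a read_at row for this user is unread
$sql = "SELECT m.*
        FROM messages m
        LEFT JOIN message_reads r
               ON r.message_id = m.id AND r.user_id = " . (int)$user_id . "
        WHERE m.recipient_id = " . (int)$user_id . "
          AND r.message_id IS NULL";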
Instead of creating a column named last_checked, you could create a column called: checked.
If you save all messages in the database, you could update the field in the database. Example:
User 1 sends User 2 a message.
PHP receives the message using the long-polling system and saves the message in a table.
User 2, when online, would send a signal to the server, notifying the server that User 2 is ready to receive messages
The server checks the table for all messages that are not 'checked' and returns them.
I have a simple web-based database using php/mysql that I use to keep track of products leaving my stockroom.
The MySQL database has a bunch of tables but the two I'm concerned with are 'Requests' and 'Salesperson' which you can see below (I've omitted irrelevant information).
Requests
R_ID ... R_Salesperson
1 ... James
2 ... Bob
3 ... Craig
Salesperson
S_ID S_Name
1 ... James
2 ... Bob
3 ... Craig
In my head section I have the following script that dynamically populates a list of our sales staff names as you type them:
// Autocomplete Salesperson Field
$("#form_specialist").autocomplete("../includes/get_salesperson_list.php", {
width: 260,
matchContains: true,
//mustMatch: true,
//minChars: 0,
//multiple: true,
//highlight: false,
//multipleSeparator: ",",
selectFirst: false
});
aaand get_salesperson_list.php:
<?php
require_once "get_config.php";
$q = strtolower($_GET["q"]);
if (!$q) return;
$sql = "select DISTINCT S_Name as S_Name from Salesperson where S_Name LIKE '%$q%'";
$rsd = mysql_query($sql);
while ($rs = mysql_fetch_array($rsd)) {
    $cname = $rs['S_Name'];
    echo "$cname\n";
}
?>
I also have some basic javascript input validation requiring a value be entered in the Salesperson field (script is in the head section):
<!-- Input Validation -->
<script language="JavaScript" type="text/javascript">
<!--
function checkform(form)
{
    // ** Validate Salesperson Entry **
    if (form.form_specialist.value == "") {
        alert("Please enter Salesperson Name");
        form.form_specialist.focus();
        return false;
    }
    // ** END Salesperson Validation **
    return true;
}
//-->
</script>
Aaaaanyway - the problem is I can't figure out how to reject any names not in the 'Salesperson' table. For example - if I were to type 'Jaaames' although it would initially suggest 'James' if I were to ignore it and submit 'Jaaames' this would be entered into the 'Requests' table. This is relatively annoying given my undiagnosed OCD and I'd rather not have to go through hundreds of requests every so often editing them.
I'd say you're taking the wrong approach here.
The Requests table should NOT be storing the salesperson's NAME; it should be saving their ID, the primary key of the Salesperson table.
Then, instead of using auto-complete to populate a TEXT input, I'd recommend using the same approach to populate a SELECT menu that uses the salesperson's ID as its value.
This accomplishes the following:
your database becomes more normalized
it removes redundant information from the Requests table
removes the need to validate the Sales Person's name on the client side
By defining S_ID as a foreign key in the Requests table, you ensure that ONLY entries in the Salesperson table can exist in the Requests table.
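Populating the menu is then a simple loop; a sketch, using the mysql_* API already in the question:

<?php
// build the dropdown from the Salesperson table so only valid IDs
// can ever be submitted with the request
$rsd = mysql_query("SELECT S_ID, S_Name FROM Salesperson ORDER BY S_Name");
echo '<select name="form_specialist">';
while ($rs = mysql_fetch_array($rsd)) {
    echo '<option value="' . (int)$rs['S_ID'] . '">'
       . htmlspecialchars($rs['S_Name']) . '</option>';
}
echo '</select>';
?>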
You could try binding an AJAX request to the form's submit, to the text field's change event, or maybe to when the field loses focus.
For this example I am using jQuery:
$('input[name=salesperson]').blur(function(){
    // when the text field loses focus
    var n = $(this).val();
    $.post('a_php_file_that_checks_db_for_names.php', {salesperson: n}, function(data){
        // post the name to a php file which in turn looks that name up
        // in the database and returns 1 or 0
        if (data)
        {
            if (data === '1')
            {
                alert('name is in database');
            }
            else
            {
                alert('name is not in database');
            }
        }
        else
        {
            alert('no answer from php file');
        }
    });
});
You would also need a PHP file for this to talk to, an example being:
if (isset($_POST['salesperson']))
{
    // query here to check for $_POST['salesperson'] in the db,
    // fill in the blanks :)
    $name = mysql_real_escape_string($_POST['salesperson']);
    $result = mysql_query("SELECT S_Name FROM Salesperson WHERE S_Name = '$name'");
    if ($result && mysql_num_rows($result) > 0)
    {
        // looks like there were results, so the name is in the db
        echo '1';
    }
    else
    {
        echo '0';
    }
}
A bit of filling in the blanks required but you get the idea.
Hope this helps you out
EDIT:
A second, more elegant solution just came to mind - if you could get the list of salespersons and make a hidden form field for each, you could read them all into a JS object and test against it whenever the form field is changed. Unfortunately I don't have the time to write you an example but it sounds like a nicer way of doing it to me.
It seems like you're just using Javascript to validate your input - this isn't good as it will never run if your user doesn't support or disables Javascript. As suggested above, a server side validation would be much easier to check against the database. However, client-side validation is also helpful to have as a sort of first line of defense against bad input, since it's generally faster. I can't think of a great way to do this, but one way could be to populate a PHP array of salespersons, convert it to a javascript array, and then check to see if the form value is in the array. It's probably faster (and substantially less code) to just use server-side validation here.
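A sketch of that PHP-array-to-JS-array idea follows (json_encode() assumes PHP 5; the manual loop avoids Array.indexOf, which older IE lacks). Remember this only complements, and doesn't replace, the server-side check:

<?php
// dump the known names into a JS array once per page load
$names = array();
$rsd = mysql_query("SELECT S_Name FROM Salesperson");
while ($rs = mysql_fetch_array($rsd)) {
    $names[] = $rs['S_Name'];
}
?>
<script type="text/javascript">
var validNames = <?php echo json_encode($names); ?>;
function isKnownName(n) {
    for (var i = 0; i < validNames.length; i++) {
        if (validNames[i] === n) return true;
    }
    return false;
}
// call isKnownName(form.form_specialist.value) inside checkform()
</script>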
Try adding some sort of validation before you put it on your database? I mean, inside the script that puts the request into the table?
The mustMatch option isn't working for you? I see it commented out.
Also, your script is vulnerable to a SQL injection attack. I realize this is an in-house application, but you never know when crazy is going to show up and ruin your day. At the top of your get_salesperson_list.php, right after you retrieve the query from $_GET, you could add something like this:
if (!preg_match("/^\w+$/", $q)) {
    // some kind of error handling here, or at least a refusal to fulfill the request:
    exit;
}
UPDATE: Sorry, I meant to say "exit" instead of "return". I do see that your script wasn't in a function. I have edited the above to account for that. Thanks for pointing that out.