Optimize AJAX website to use least resources - php

I have a webpage that uses AJAX, MySQL, and a simple PHP file to grab content and place it in the body of the page. Basically, the entire site is one dynamic page that uses jQuery and the history plugin to keep all the links bookmarkable and Back/Forward capable.
I want to optimize my site to use the least amount of resources possible (server-side). Right now, any time someone clicks a link to another "page" on my site, the PHP script is called, a database connection is created, the content is fetched from the database, and it is then placed on the page with JavaScript.
Would it be better to instead have the PHP file grab a cached file that contains the content and then send that to the browser?
I still want my pages to be as up to date as possible, though, so I was thinking of adding a column to the content table that stores its modification date; if the cached file is older, the script would load the data from the table and replace the cached file. However, that would make the PHP script both create a database connection AND check the modification time of the cached file.
What's the best solution?

When you update the data in the database, also remove any cached version of the corresponding data.
That way, the PHP file can simply check whether a cached version exists. If there isn't one, connect to the DB, cache the data, and return it; otherwise, just return the cached version. A database connection is then only established when no cached version exists.
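A minimal sketch of that flow (the script name, cache directory, and table layout here are all hypothetical):

<?php
// get_content.php?page=about
$page      = basename(isset($_GET['page']) ? $_GET['page'] : 'home'); // basename() blocks path traversal
$cacheFile = __DIR__ . '/cache/' . $page . '.html';

if (is_file($cacheFile)) {
    readfile($cacheFile);                 // cached copy exists: no database connection at all
    exit;
}

$pdo  = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'pass');
$stmt = $pdo->prepare('SELECT body FROM pages WHERE slug = ?');
$stmt->execute([$page]);
$body = $stmt->fetchColumn();

file_put_contents($cacheFile, $body);     // cache it for the next request
echo $body;

// and in whatever script updates the content, right after the UPDATE query:
//     unlink(__DIR__ . '/cache/' . $page . '.html');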


Dynamically loading blog posts from database

I'm sure I'm doing this wrong, and that there's a better way of doing this.
I just finished making a food blog website. Posts are uploaded into a database, and the HTML pages are generated from the results of a SELECT statement when they are needed.
Basically, I didn't think it was smart to create and store a new HTML file for every post, but rather to create pages on the fly.
This is my (terrible) method:
User clicks on link or types URL
URL is structured like so: www.website.com/name-of-post
There is no file called name-of-post.php, so the request redirects to my 404 error page.
On that page, I check whether 'name-of-post' exists in the database.
If it does, I generate the page; otherwise, I show the 404 error content.
This method also means that the pages aren't indexed by Google, so people will not be able to find a recipe that way.
I know this can't be right. So is there a better way of doing this?
Well, you just discovered the reason PHP was invented 20 years ago. It's a templating engine that allows you to dynamically generate web pages.
Storing content in your database is generally fine. Most content is just data. However, building a web page using that content is an entirely different matter. There's neither a need to generate an actual .html file on your filesystem nor a need to store an .html file (in its entirety) in your database to do so.
The concept of templating in PHP is that you create the format separate from the content, such that the format can always be changed, and the content can always be plugged in regardless.
For example, here's a very simple template.
<h1>Hello <?=$name?></h1>
<p>Today is <?=date('l F jS, Y')?>.</p>
The format of this page will always be the same, regardless of the content: the variables are what we plug into the template from our database. Your problem is no different. You can store templates written as simple HTML and PHP that just plug in the content through variables (likely populated by the data in your database).
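One common way to plug database content into such a template is output buffering plus extract(); a minimal sketch (the render() helper and file names are just for illustration):

<?php
function render($templateFile, array $vars)
{
    extract($vars);              // turn the array keys into local variables for the template
    ob_start();                  // capture the template's output instead of printing it
    include $templateFile;
    return ob_get_clean();
}

echo render('templates/hello.php', ['name' => 'World']); // e.g. the two-line template above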
The concept of dynamically creating URLs is much the same. You can tell your web server to redirect all requests to a main PHP script (typically referred to as a front controller), like your index.php. That script checks the database for the request URI (using something like $_SERVER['REQUEST_URI'], for example), pulls the needed content from your database, and uses the template to generate the output; out comes the page for the client.
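A bare-bones front controller along those lines might look like this (the rewrite rules, table, and column names are assumptions):

<?php
// index.php - the web server rewrites every request here, e.g. with Apache:
//   RewriteEngine On
//   RewriteCond %{REQUEST_FILENAME} !-f
//   RewriteRule ^ index.php [L]

$slug = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/'); // "name-of-post"

$pdo  = new PDO('mysql:host=localhost;dbname=blog', 'user', 'pass');
$stmt = $pdo->prepare('SELECT title, body FROM posts WHERE slug = ?');
$stmt->execute([$slug]);
$post = $stmt->fetch(PDO::FETCH_ASSOC);

if (!$post) {
    http_response_code(404);     // a real 404 status, so search engines know the difference
    include 'templates/404.php';
    exit;
}

include 'templates/post.php';    // the template prints $post['title'] and $post['body']

Because every post now resolves to a real URL with a 200 status, Google can crawl and index the pages normally.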

PHP - Edit Request Being Made by Other Request to the Server?

Hi everyone,
I'm working with cURL to download a page from a website. After that page loads in a browser, it makes a few additional requests to the server for files to load into the HTML: decorative images picked at random from a set, a few of which are displayed each time. Those randomized images are what I want to compare.
I want to load the images for the comparison via cURL, but only after the page has finished downloading.
Right now, the webpage I get from my request contains <img> tags, but their src attributes are wrong, because they point to paths that are local to the server I'm fetching the webpage from.
Because of that, the original page on the website is able to load the pictures (since it runs on that server), yet I'm unable to fetch them successfully.
I already have the page in a string variable. I know the address of the page and the URLs/paths of the images, but I don't want to make another request to the server, because then the images would be re-randomized, and I want to compare them as they were when the page loaded.
How can I do that? Is it possible to make such a request? Is it possible to somehow modify my existing request for the page, or anything else that would bring me the images?
I know how to make requests, but I've never dealt with this kind of a "depth" of requests.
Thanks a lot in advance!

Dynamic web-page interface with persistent client changes

I am trying to create a web page with a tab menu. I want to be able to dynamically add and delete tabs (and other content). There is a perfect example of what I want here: http://www.dhtmlgoodies.com/index.html?whichScript=tab-view . I want the newly created tabs to persist across page loads: if I add a tab and refresh, the tab should still be there, and if I close the browser and reload the page a month later, the tab and its content should still be there. This page is for personal use and will be hosted on my computer and accessed through the browser alone, not through any kind of web server, although I'm not against using a web server if I need to.
Looking at the code, it seems that the 'add tab' functions just add HTML to the page in memory, but I need them to permanently change the HTML of the page. Is there a way to write dynamic changes to the DOM back to disk? I'm not quite sure where to go with this, and a week of searching has left me with too many language and implementation options to look into. I am not an experienced web developer, and there are so many different ways to create web pages and so many new terms that I'm a little overloaded now.
I do realize that this is a little outside the realm of a typical web-site. It is generally not a good idea to let the client-side make changes to data on the server-side. But since I am the only person who will be using this and it will not be accessible from the internet security is not an issue.
I'm not opposed to any particular scripting language, but I would like to keep it as simple as possible, i.e., one HTML page, one CSS file, and maybe a script file; whatever is necessary. I am not opposed to reading and learning on my own either, so being pointed down the right path is fine for me.
If you need a rock-solid method, you need some record of those tabs existing. That means having a database that knows that the tab exists, which tab it was, and what content it contained. HTML5's local browser storage (not to be confused with cookies) could also be a viable solution, but browser compatibility is an issue (for now).
You also need some sort of user account system so you know which of your users had this set of tabs open. Otherwise, with a single tab list shared by everyone, everyone would open the same tabs!
For the dynamic HTML and JS side of adding tabs, you are on the right track. You need PHP to interact with the MySQL database. PHP receives data on the server from the browser about what happened, so it can:
know which user is logged in
see which action they chose (add or remove a tab)
add a record to the database or delete one
reply with success or error, whichever happened
For MySQL, you need to create a database with a table for your tab list (a PHP sketch tying this together follows the list). This table must have:
User id (to know which user did what among the ones in the list)
Tab id (to know which tab is which among the ones in the list)
Tab content (it may be a link for an iframe, actual HTML, plain text, etc.)
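The PHP side of that could be as small as this sketch (the endpoint name, table, and columns are hypothetical):

<?php
// save_tab.php - called from the browser via AJAX,
// e.g. POST action=add&title=...&content=...
session_start();
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : 1; // a fixed id is fine for one user

$pdo = new PDO('mysql:host=localhost;dbname=tabs_app', 'user', 'pass');

if ($_POST['action'] === 'add') {
    $stmt = $pdo->prepare('INSERT INTO tabs (user_id, title, content) VALUES (?, ?, ?)');
    $stmt->execute([$userId, $_POST['title'], $_POST['content']]);
    echo json_encode(['status' => 'ok', 'tab_id' => $pdo->lastInsertId()]);
} elseif ($_POST['action'] === 'remove') {
    $stmt = $pdo->prepare('DELETE FROM tabs WHERE id = ? AND user_id = ?');
    $stmt->execute([$_POST['tab_id'], $userId]);
    echo json_encode(['status' => 'ok']);
}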
Friend, when you talk of closing the browser and not losing the data, you are talking about data persistence, or data durability. In other words, you have to save your data somewhere and load it again next time.
For storage you can use a flat file (a simple text file), a database, an XML file, etc. Whichever you pick, you will need to do some learning to save the information and content of the new tab there and load it the next time the page opens.
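Since this is a single-user page, even a flat JSON file would do; a rough sketch (the file and endpoint names are arbitrary):

<?php
// tabs.php - GET returns the saved tab list, POST replaces it
$file = __DIR__ . '/tabs.json';

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // save whatever tab list the browser sent (the page would post it as JSON)
    file_put_contents($file, file_get_contents('php://input'));
    echo '{"status":"ok"}';
} else {
    header('Content-Type: application/json');
    echo is_file($file) ? file_get_contents($file) : '[]'; // empty list on first run
}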

PHP and Javascript / Ajax caching for load speed - JSON and SimpleXML

I have a site that gets content from other sites through some JSON and XML APIs. To prevent loading problems and problems with API limits, I do the following:
PHP - Show the cached content with PHP, if any.
PHP - If the content has never been cached, show an empty error page and return a 404. (The second time the page loads, it will be fine: success 200.)
Ajax - If a date field does not exist in the database, or the current date is past the stored date, load/add content from the APIs and add a future date to the database. (This makes the page load fast, and the Ajax caches the content AFTER the page is loaded.)
I use Ajax just to run the PHP file; the content itself is fetched with PHP.
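For reference, the PHP side of that flow might look roughly like this (file names are made up):

<?php
// page.php - serve the cached copy, or 404 on the very first, uncached hit
$cacheFile = __DIR__ . '/cache/page.html';

if (!is_file($cacheFile)) {
    http_response_code(404);     // first visit: nothing cached yet
    include '404.php';
    exit;
}
readfile($cacheFile);            // afterwards, AJAX runs the refresh script separately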
Questions
Because I cache the content AFTER it is loaded, the user will see the old content. What is the best way to show the NEW content to the user? I'm thinking of automatically reloading the page with JavaScript, or showing a message/nag. Are there other preferred ways?
If I use very many APIs, the Ajax load time will be long, and there's a bigger risk that some error will occur. Is there a clever way of splitting the load?
The second question is the important one.
"Because I cache the content AFTER it is loaded, the user will see the old content. What is the best way to show the NEW content to the user? I'm thinking of automatically reloading the page with JavaScript, or showing a message/nag. Are there other preferred ways?"
I don't think you should reload the page via JavaScript; just use jQuery's .load(). This way the new content is inserted into the DOM without reloading the entire page. Maybe highlight the newly inserted content by adding some CSS via addClass().
"If I use very many APIs, the Ajax load time will be long, and there's a bigger risk that some error will occur. Is there a clever way of splitting the load?"
You shouldn't be splitting the content in the first place; you should try to minimize the number of HTTP requests. If possible, do all the API calling offline using some sort of message queue, for example beanstalkd or Redis, and cache the resulting data in an in-memory store like Redis. A free Redis instance is available through http://redistogo.com; to connect to it you would probably use Predis.
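A minimal sketch of that kind of Redis-backed cache using Predis (the cache key, TTL, and API URL are made up):

<?php
require 'vendor/autoload.php';                    // Predis installed via Composer

$redis = new Predis\Client('tcp://127.0.0.1:6379');

$key  = 'api:feed';                               // arbitrary cache key
$data = $redis->get($key);                        // Predis returns null on a cache miss

if ($data === null) {
    // miss: call the slow external API once, then keep the result for an hour
    $data = file_get_contents('http://api.example.com/feed.json');
    $redis->setex($key, 3600, $data);
}

echo $data;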
Why not use the following structure:
AJAX loads content.php, and in content.php (sketched below):
content is cached and the date is still fresh > return the cached content
content is cached but the date is stale > reload it from the external API > return it
no content yet > load it from the external API > return it
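A rough sketch of content.php along those lines (the table layout and the fetch_from_api() helper are placeholders, not your actual code):

<?php
// content.php - the three branches above, in order
function fetch_from_api() {
    // placeholder for your real JSON/XML API calls
    return file_get_contents('http://api.example.com/feed.json');
}

$pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'pass');
$row = $pdo->query('SELECT body, expires_at FROM content WHERE id = 1')
           ->fetch(PDO::FETCH_ASSOC);

if ($row && strtotime($row['expires_at']) > time()) {
    echo $row['body'];                            // cached and still fresh
    exit;
}

$body = fetch_from_api();                         // stale or missing: refresh it
$stmt = $pdo->prepare('REPLACE INTO content (id, body, expires_at)
                       VALUES (1, ?, DATE_ADD(NOW(), INTERVAL 1 DAY))');
$stmt->execute([$body]);
echo $body;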
As for your second question: it depends on how often the content from the APIs needs to be refreshed. If it's daily, you could run a script at night (or whenever the fewest people are active) to fetch all the new content, and then serve that content during the day. This way you minimize the calls to external resources during peak hours.
If you have access to multiple servers, the clever way to split the load is to have each server handle a part of the requests.

Removing Uploaded Files from Google when item Expires

We're using the paid Google CSE (Custom Search Engine) service to index content on our website. The site is built mostly of PHP pages assembled from include files, but there are some dynamic pages that pull info from a database into a single page template (new releases, for example). The issue we have: I can set an expiry date on content in the database, so that, say, id=2 will bring up a "This content is expired" notice. However, if ID 2 had an uploaded PDF attached to it, the PDF file remains in the search index.
I know I could write a cleanup script and have cron run it: it would look at the DB, find expired content, check whether any uploaded files were attached, and either rename or remove them. But there has to be a better solution (I hope).
Please let me know if you have encountered this in the past, and what you suggest.
Thanks,
D.
There's unfortunately no way to give you a straight answer at this time: we have no knowledge of how your PDFs are "attached" to your pages or how your DB is structured.
The best solution would be to create a robots.txt file that blocks the URLs for the particular PDF files that you want to remove. Google will drop them from the index on its next pass (usually in about an hour).
http://www.robotstxt.org/
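For example, a robots.txt blocking two expired PDFs might look like this (the paths are made up):

User-agent: *
Disallow: /uploads/expired-release-2.pdf
Disallow: /uploads/expired-release-7.pdf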
What we ended up doing was tying a cleanup check to the upload script: once the current upload completed, old files were unlink()ed and their DB records deleted.
For us, this works because it's kind of an "add one/remove one" situation where we want a set number of items to appear in rolling order.
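A sketch of that kind of cleanup step, run right after the new upload succeeds (the paths, table, and column names are hypothetical, not our actual schema):

<?php
// drop expired rows and unlink the files they reference
$pdo  = new PDO('mysql:host=localhost;dbname=cms', 'user', 'pass');
$rows = $pdo->query('SELECT id, pdf_path FROM releases WHERE expires_at < NOW()')
            ->fetchAll(PDO::FETCH_ASSOC);

foreach ($rows as $row) {
    if ($row['pdf_path'] && is_file($row['pdf_path'])) {
        unlink($row['pdf_path']);                 // remove the attached PDF from disk
    }
    $pdo->prepare('DELETE FROM releases WHERE id = ?')->execute([$row['id']]);
}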
