I'm refactoring an API where there are user profiles in a profiles table and profile images in a separate table. Currently the API queries the profiles table and then, for each profile, queries the images table for the associated image data (paths etc.). There is logic built in that adds a default image path when a profile image isn't set. So if we are displaying 50 profiles, there are 51 queries being run.
I'm considering refactoring so that the initial profile query joins the images table. That leaves me with two options:
1. Loop through the results server side to build the image paths, then loop through them again client side to display the results.
2. Loop through the results once, client side, and build the image paths there. The path logic is easy, a simple if statement.
It seems option 2 would be the logical choice. But is it? I guess this is part of a bigger question of when you are building out APIs and the client-side interfaces: when do you move code from the server to the client to keep the API fast, at the risk of slowing down the browser? How do you do this dance? I'm working on another API, using Node for the jQuery DataTables plugin, where there needs to be a lot more code to marry the backend to the frontend, and it's been a bit of a tug of war determining how much I should hand over to the browser. A fast API is not much use if you are crashing your visitors' browsers.
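For concreteness, here's roughly what the joined query and the server-side default-path logic (option 1) would look like. This is only a sketch; the table and column names are simplified placeholders, not my real schema:

```php
<?php
// A minimal sketch, assuming PDO and hypothetical tables
// profiles(id, name) and profile_images(profile_id, path).
// One LEFT JOIN replaces the 1 + N queries.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$sql = "SELECT p.id, p.name, i.path AS image_path
        FROM profiles p
        LEFT JOIN profile_images i ON i.profile_id = p.id
        LIMIT 50";

$profiles = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);

// Option 1: build the final path server side, falling back to a default.
foreach ($profiles as &$profile) {
    $profile['image_url'] = $profile['image_path']
        ? '/images/profiles/' . $profile['image_path']
        : '/images/profiles/default.png';
}
unset($profile);

echo json_encode($profiles);
```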
The tipping point for the decision, for me, would be:
Am I, by exposing the components of the path so the client can build it, exposing something I don't want to?
vs.
Am I, by constructing the image paths server side, doing work that the client might not need, or that the client might have to redo, like chopping them up on occasion, for instance?
In terms of passing more data than is needed, I'm not seeing an issue from what you've said, so the first question would be the one with the most priority for me.
Sort of stretching in this scenario, but the client having to know how to compose the image path sets a few constraints, whereas if it's all done server side the implementation details are hidden. Despite them being simple, that would be my default option.
As you've said, it's a tug of war. Another way to look at issues like this is that the "right" answer can depend on when you ask the question. You could go one way and then, a bit later, some new requirement pops up, and now it's the wrong one...
Simple and consistent is the thing to aim for. Right as in best? 20/20 hindsight time.
When I've seen and done this in the past, I've found it better not to store images (like it sounds like you are doing) in a DB. Put them in a place where the browser can link to them and pass the path from the server.
If I understand you correctly, you are displaying some sort of profiles list where each profile has an associated image... right?
Abstracting from the way you store the images (DB or VFS images are faster, but straight files, with at least a minor MRU cache, are easier to maintain):
Solution numero uno is the right way to go.
It is just simpler and more "RESTful". I am a huge fan of client logic, but we should use it for a good cause (such as a Soopa-UI). The same goes for DB logic code vs server logic code. I dislike SQL and having to maintain another layer of problems, but I do understand the difference it makes in some cases to the final result.
EDIT: Oh... you are storing just paths.
So if you are not doing some fancy one-page web app, then there is another problem with building paths client side: the client would have to wait for the script to finish loading before the images would even start to load.
Related
If this isn't appropriate, I apologize, but I wanted to get some feedback on a question I was recently asked during a phone interview. I'm strong on front end development but not very clear on back end programming, something I am trying to remedy.
After I got off the call, I had a bit of l'esprit de l'escalier, I think...
Here's the scenario: You have a simple page where a user is presented
with a random image and allowed to move it around the page, at the
same time that user can see other users of the same page who are also
moving around their own random images, but no one is allowed to
interact with any other user's images.
So, assuming the LAMP stack is in play and jQuery / JavaScript for your front end, describe how you would implement this and prevent these users from taking control of the objects. Assume the users are savvy enough to watch the POST calls in Firebug.
I was able to describe a simple interface and control. I was able to describe streaming coordinates to and from a database.
I struggled a bit to think of a good way to protect the information being retrieved while on the call.
After I was off the call, within moments, I thought of a simple method of preventing others from gaining control of this data: not exposing the actual IDs of the objects within the database from which they are called. But I'm still not certain of how to do this exactly. I imagine using a PHP engine to abstract the variable calls, using random IDs on the objects each user cannot interact with.
This is not something that I have ever considered when working with PHP / MySQL, but of course I'm thinking that I probably should, even when beating an open source CMS or something into submission.
So, my question is if someone could describe their own thoughts on this or point me to a resource to help me grok this, and how I would use AJAX / PHP to make this work? Am I on the right track?
I haven't heard if I got the job yet, but even though it seems it was primarily a front-end role, I think they wanted a bit more familiarity with the LAMP stack than I was able to demonstrate.
Thanks in advance for any help you can provide. Yes, I will be following up on this on my own, and I'm already putting together some plans to dig deeper into PHP and MySQL for my own edification.
I just took this up as a challenge myself, to try out new technology, and I found it quite a fun little thing to work on. The approach I took was in Node.js, using MongoDB as storage.
Using Socket.IO, the manipulation was set up pretty fast. As for protecting the objects from external manipulation, I relied on the session ID, which I linked to the object ID. This way, you can safely expose the ID of the object without it getting compromised.
Do note that the manipulation is limited to following the other cursors on the same page.
http://gist.github.com/ThomasHambach/5168951
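For the PHP side the question actually asked about, the same ownership idea might be sketched roughly like this. This is only an illustration under assumed names; the table and fields (objects, session_id, x, y) are hypothetical:

```php
<?php
// The client only ever sends its own object's ID plus coordinates;
// the server checks ownership against the session before updating anything.
session_start();

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$objectId = isset($_POST['object_id']) ? (int) $_POST['object_id'] : 0;
$x = isset($_POST['x']) ? (int) $_POST['x'] : 0;
$y = isset($_POST['y']) ? (int) $_POST['y'] : 0;

// Only update the row if it belongs to this session; a forged ID simply
// matches zero rows, so other users' images cannot be moved.
$stmt = $pdo->prepare(
    'UPDATE objects SET x = :x, y = :y
     WHERE id = :id AND session_id = :sid'
);
$stmt->execute(array(
    ':x'   => $x,
    ':y'   => $y,
    ':id'  => $objectId,
    ':sid' => session_id(),
));

echo json_encode(array('updated' => $stmt->rowCount() === 1));
```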
So, I'm new to dynamic web design (my sites have been mostly static with some PHP), and I'm trying to learn the latest technologies in web development (which seems to be AJAX). I was wondering: if you're transferring a lot of data, is it better to construct the page on the server and "push" it to the user, or is it better to "pull" the data needed and create the HTML around it on the client side using JavaScript?
More specifically, I'm using CodeIgniter as my PHP framework and jQuery for JavaScript, and if I wanted to display a table of data to the user (dynamically), would it be better to format the HTML using CodeIgniter (create the tables, add CSS classes to elements, etc.), or would it be better to just serve the raw data as JSON and then build it into a table with jQuery? My intuition says to do it client side, as it would save bandwidth and the page would probably load quicker with the new JavaScript optimizations all these browsers have now; however, then the site would break for someone not using JavaScript...
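To make the comparison concrete, the JSON route would need something roughly like this on the CodeIgniter side, with jQuery building the table from the response. The controller, model, and table names here are just placeholders, and it assumes the database library is loaded:

```php
<?php
// Hypothetical CodeIgniter 2-style controller for the "serve raw JSON" option.
class Reports extends CI_Controller
{
    public function table_data()
    {
        // 'orders' is a placeholder table; the client turns this into <tr>/<td>.
        $rows = $this->db->get('orders')->result_array();

        $this->output
             ->set_content_type('application/json')
             ->set_output(json_encode($rows));
    }
}
```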
Thanks for the help
Congratulations on moving to dynamic sites! I would say the following conditions have to be met for you to do client-side layout (it goes without saying that you should always be doing things like filtering DB queries and controlling access rights server side):
Client browser and connection capabilities are up to snuff for the vast majority of use cases
SEO and mobile/legacy browser degradation are not a big concern (much easier when you synthesize HTML server side)
Even then, doing client-side layout makes testing a lot harder. It also produces rather troublesome synchronization issues. With an AJAX site that loads partials, if part of the page screws up, you might never know, but with regular server-side composition, the entire page is reloaded on every request. It also adds additional challenges to error/timeout handling, session/cookie handling, caching, and navigation (browser back/forward).
Finally, it's a bit harder to produce perma-URLs in case someone wants to share a link with their friends or bookmark a link for themselves. I go over a workaround in my blog post here, or you can have a prominent "permalink" button that displays a dynamically rendered permalink.
Overall, especially when starting out, I would say go with the more kosher, better supported, more tutorialed, traditional approach of putting together the HTML server side. Then dip in some AJAX here and there (maybe start out with form validation or auto-completion), and then move on up.
Good luck!
It is much better to do the heavy lifting on the server side.
In CodeIgniter you create a view, looping through all the rows in the table and adding in the classes or whatever else you need. There is no reason at all to do this in JavaScript.
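A minimal sketch of such a view, assuming the controller passes in a $rows array; the file name and columns are only illustrative:

```php
<!-- views/orders_table.php: loaded with
     $this->load->view('orders_table', array('rows' => $rows)); -->
<table class="data-table">
    <tr><th>ID</th><th>Customer</th><th>Total</th></tr>
    <?php foreach ($rows as $i => $row): ?>
        <!-- Row striping and CSS classes handled right here in the view -->
        <tr class="<?php echo ($i % 2) ? 'odd' : 'even'; ?>">
            <td><?php echo (int) $row['id']; ?></td>
            <td><?php echo htmlspecialchars($row['customer']); ?></td>
            <td><?php echo htmlspecialchars($row['total']); ?></td>
        </tr>
    <?php endforeach; ?>
</table>
```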
JavaScript is a sickly abused language with unfortunate syntax. Why on earth you would want to load a page and then issue an AJAX call to load up some JSON objects to push into a table is beyond me. There is little reason to do that.
JavaScript (and jQuery) is for end-user enhancement. Make things slide, flash, disappear! It is not for data processing under even the mildest of loads. The end-user experience would be crap, because you're relying on their machine to process all the data when you have a server that is infinitely more capable and designed specifically for this.
It depends on your target market and the goal of your site.
I strongly believe in using the client side wherever you can to offload work from the server. Obviously it's important to do it correctly so it remains fast for the end user.
On sites where no-JS support is important (public websites, etc.), you can have fallbacks to the server. You end up doubling code in these situations, but the gains are very beneficial.
For advanced web applications, you can decide if making JS a requirement is worth the trade-off of losing a (very) few users. For me, if I have some control over the target market, I make it a requirement and move on. It almost never makes sense to spend a ton of time supporting a small percentage of the potential audience. (Unless the time is spent on accessibility, which is different, and VERY important regardless of how many people fit into this group on your site.)
The important thing to remember is to touch the DOM as little as possible to get the job done. This often means building up an HTML string and using a single append action to add it to the page, versus looping through a large table and adding one row at a time.
It's better to do as much as possible on the server-side because 1) you don't know if the client will even have JavaScript enabled and 2) you don't know how fast the client-side processing will be. If they have a slow computer and you make them process the entire site, they're going to get pretty ticked off. JavaScript/jQuery is only supposed to be used to enhance your site, not process it.
You got the trade-off right. However, keep in mind that you can activate compression on the server side, which will probably reduce the repetitive markup used to format the table to a small bandwidth cost.
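For example, a low-effort way to switch compression on from PHP, assuming zlib is compiled in and no output has been sent yet:

```php
<?php
// Compress the response if the browser supports it; repeated <tr>/<td>
// markup compresses very well, so the bandwidth difference shrinks a lot.
if (!ini_get('zlib.output_compression')) {
    ob_start('ob_gzhandler');
}
// ... render the table as usual ...
```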
Also keep in mind that writing JavaScript that works in all browsers (including handhelds) is more complicated than doing the same server side in PHP. And don't forget that the "new JavaScript optimizations" do not apply to the same extent in handheld browsers.
I do a great deal of AJAX app development, and I can tell you this from my experience: a good balance between the two is key.
Serve the raw data server side, but use JavaScript to make any modifications you need to it, such as paging, column sorting, row striping, etc.
I absolutely love doing everything in AJAX, heh... but there is a shortfall to doing it with AJAX, and that's SEO. Search engines do not read JavaScript, so for the sake of your website's page rank, I would say have all data served up server side and then formatted and made to look cool client side.
The reason I love AJAX so much is that it drastically speeds up your app for the user, as it only loads the data you need, where you need it, versus loading the entire page every time you do something. You can do a whole bunch of stuff, such as hide/show rows and columns (we are talking about table manipulation here because you mentioned a table), and even attach delete actions, where clicking a delete row or button removes that row not only visually but also in the database, all done via AJAX calls to server-side code.
In short:
Raw data: server side, sending the client the raw data in HTML layout (tables for table-structured data; I do everything else in divs and other flexible HTML tags, and only use tables for column/row-style data).
Data formatting: client side, which also includes any means of interacting with the data: adding to it, deleting from it, sorting it differently, etc. This achieves two things: SEO, and user experience (UX).
I am working on a task to enable image uploading and auto-scaling (from full size to thumbnail) with jQuery & PHP.
I can naturally come up with two approaches:
First, store both images as binary objects directly in MySQL;
Second, store only URLs to the images and keep the images somewhere on the server.
The images are for everyone to view, so there are no security restrictions, as far as I know.
Personally I don't have any preference; however, at the end of the day, it is the business people who are going to manage the images as part of the system (CRUD). So I am wondering:
which seems to be a bit better for them?
Of course I am building an easy-to-use, visual web interface for the staff to control the process, but I am not sure if that is enough. Lessons learned have told me that if I don't think about the future and seek the most flexible approach, then I will probably screw myself sooner or later.
PS. The following link is what I've found so far, which is pretty cool, no Flash involved :)
Andrew Valum's ajax image upload jQuery plugin
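For the scaling step itself, the server-side piece I have in mind is roughly this, a GD-based sketch; the paths, the 150px width, and the JPEG-only assumption are placeholders:

```php
<?php
// Rough full-size -> thumbnail step using GD (assumes the GD extension
// and a JPEG source; a real version would branch on the image type).
function make_thumbnail($srcPath, $dstPath, $thumbWidth = 150)
{
    list($width, $height) = getimagesize($srcPath);
    $thumbHeight = (int) round($height * $thumbWidth / $width);

    $src = imagecreatefromjpeg($srcPath);
    $dst = imagecreatetruecolor($thumbWidth, $thumbHeight);

    imagecopyresampled($dst, $src, 0, 0, 0, 0,
                       $thumbWidth, $thumbHeight, $width, $height);

    imagejpeg($dst, $dstPath, 85);
    imagedestroy($src);
    imagedestroy($dst);
}

make_thumbnail('uploads/full/photo.jpg', 'uploads/thumbs/photo.jpg');
```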
Oh, how managers do like the "I know this too, I wanna play with it" thingies.
Store the images on a server. This way they can view/put/copy/modify the images the way they are used to: using Windows Explorer. They already know how to do it, and you won't have to write a lot of custom code afterwards for every "I want to be able to X the images..." request.
Like Konerak said, store the images on a server.
But not in the database; just as files.
Their names can be stored in the database, if needed.
That's the plain, simple and natural way.
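A minimal sketch of that approach, assuming a plain uploads directory and a table that only holds file names (the directory, table, and column names are placeholders):

```php
<?php
// Move the upload onto the filesystem and keep only its name in MySQL.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$name = uniqid('img_') . '.jpg';          // avoid file-name collisions
$dest = __DIR__ . '/uploads/' . $name;

if (move_uploaded_file($_FILES['image']['tmp_name'], $dest)) {
    $stmt = $pdo->prepare('INSERT INTO images (filename) VALUES (:f)');
    $stmt->execute(array(':f' => $name));
}

// Later, the page just links to it: <img src="/uploads/<filename>">
```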
First, store both images as binary objects directly in MySQL;
Noooo... no. It's only going to take a month of PHBs uploading 10-megapixel images for you to start crying uncle.
Second, store only URLs to the images and keep the images somewhere on the server.
Yes, a thousand times yes. Store the original file in one place and the latest edit in another.
IMO both are bad. Images in the DB cause connection pooling troubles; images outside the DB cause consistency nightmares.
For a first version I would just stick everything in the DB; this is good enough if you don't have too many users. If it's a success, you could consider an integrated solution which handles both types of data, like JCR; for PHP you have Jackalope. A bit complicated though; I wish there were better solutions.
JCR has WebDAV bindings, so your managers can navigate the whole content tree in Explorer if they want. Not that I think that is a good idea though. One solution would be to let them play through WebDAV, and always roll back the transaction at the end :)
If I have a PHP application which allows users to make changes to documents, what is the best way to implement revision tracking for each document? I want the storage of each revision to be deltified (i.e. only save the changes that were made) like svn and other SCMs do with code. I know on a very simple level how it works, but when I start to think about implementing it, I get a little confused.
First and foremost, I am wondering if there is a library out there that can help me with this, so I don't have to completely roll my own.
And I am wondering: should I keep the full text of only the original document, and then only save the changes, or should I keep the full text of the latest document, and each time it is modified, save the differences as one of the older revisions?
If the former, then when I want to grab a page to be shown on the site, do I have to start at the beginning, and then recursively update the data based on the revisions, until I reach the current version? Won't this be painfully slow once there are many revisions?
How can I do diff/patch type operations in PHP to make the deltifying and reconstructing of the pages easier?
Would it be worth it to have locks on the pages when they're editing them? Or let pages get into 'states of conflict' and have conflict resolution operations -- let two users modify the same page simultaneously if they're modifying different parts, etc -- I'm going crazy thinking about how hard this will be. Ahh!
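The closest I can picture so far, assuming something like the PECL xdiff extension can be installed, is keeping the latest full text plus a reverse delta per edit, so reads of the current version stay cheap. A rough sketch (file names and layout are placeholders):

```php
<?php
// Requires the PECL xdiff extension.
$current = file_get_contents('doc_current.txt');
$newText = $_POST['body'];   // the user's edit

// A diff that turns the NEW text back into the CURRENT one (a reverse delta).
$reverseDiff = xdiff_string_diff($newText, $current);

// Persist: overwrite the current version, append the reverse diff to history.
file_put_contents('doc_current.txt', $newText);
file_put_contents('revisions/' . time() . '.diff', $reverseDiff);

// To reconstruct the previous revision, patch backwards from the latest text.
$previous = xdiff_string_patch($newText, $reverseDiff);
```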
This previous SO question might help.
Why don't you use a Subversion server? You can access the client from the console using exec() or similar. It is really not worth implementing something like that from scratch unless you are writing revisioning software.
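For example, roughly like this; the working-copy path is a placeholder, and anything user-supplied should go through escapeshellarg():

```php
<?php
// Let a local Subversion working copy handle the delta storage.
$wc   = '/var/app/documents';          // svn working copy (placeholder path)
$file = $wc . '/page-42.txt';

file_put_contents($file, $_POST['body']);

exec('svn add --force ' . escapeshellarg($file) . ' 2>&1');
exec('svn commit -m ' . escapeshellarg('web edit') . ' '
     . escapeshellarg($file) . ' 2>&1', $output, $status);
```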
This is a general programming question.
What is the best way to make a light blogging system that can handle images, BBCode-ish styling and text without a database back end? Light means not more than 50 to 100 posts in extreme cases.
What language(s) should be used? Is there any preferred data format for the information? How does security play out?
EDIT: Client has no database, is on a shared server. Can't change that. Therefore, no DB.
EDIT2:
Someone mentioned SQL Compact - does that require anything more than copying files to the server? The key here is again that things shouldn't require any more permissions than FTP access.
If you're looking to do it yourself: store each post as a file in a directory. Then, to sort and limit the posts, you rely partially on the file names to order and limit them, and potentially (in the case of a search) on reading every last file. Don't go letting users make 10,000 posts, though. But yeah, the above is considered a flat-file data format. You can get fancy by using a standard format like JSON, YAML, or XML within each post file, and even fancier by requesting these with Ajax calls in mostly client-side code.
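In PHP that can stay very small; a sketch, assuming one JSON file per post named with a date prefix (the directory layout and field names are placeholders):

```php
<?php
// One JSON file per post, named so that sorting by file name sorts by date,
// e.g. posts/2011-03-14-my-first-post.json. No database required.
$files = glob(__DIR__ . '/posts/*.json');
rsort($files);                        // newest first, thanks to the date prefix

$posts = array();
foreach (array_slice($files, 0, 10) as $file) {   // latest 10 posts
    $posts[] = json_decode(file_get_contents($file), true);
}

foreach ($posts as $post) {
    echo '<h2>' . htmlspecialchars($post['title']) . '</h2>';
    echo '<div>' . $post['html'] . '</div>';
}
```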
Now if the reason you want to work with flat files is that you just don't want to install a database server, there's nothing stopping you from reading a local (to the server) file as a Berkeley DB, a Lucene index, or an SQLite DB from within your webapp using the appropriate client library. You'll find any of these approaches a little more sane (a bit faster, a bit more readable in code) than the aforementioned, with all the same requirements for installing on the server (read-write file permissions). Many web frameworks and languages (like PHP) come with the option of an API to these client libraries, SQLite and Lucy (C Lucene) in particular.
If you're just looking for examples of it being done, I first came across Blosxom (I think in 1999 or 2000), which is a Perl script that either runs as a CGI script per request or as a cron job. It builds a dated index of "posts" based on whatever you throw into the directory it's meant to scan. It also builds an RSS feed.
Jekyll or Blogofile are my favorite kind of solution for that, "compiling pages before upload".
I'm going to go out on a limb here and say that it's not always about the destination, but the journey.
If you're going to set out to do this, I recommend using a language you are comfortable with. Personally, this would be C#/.NET for me, but from your tagging, I'll assume PHP would be the server-side scripting language you would choose.
I would lay out how I wanted my application to behave. If there is going to be a lot of data, you should consider (as dlamblin mentioned) a DB of some sort for lookup and retrieval. (A light blog, not so much data... 1000 users can edit? Maybe you should consider a DB.) Once you've decided how to store the data, decide how to present it.
Write some proof-of-concept code for each of the features you want to implement (blog templating, BBCode, user authentication, text searching...) and start to work them all together.
Search for flat-file CMSes on Google, for example:
http://www.flatcms.org/
This has already been done, so there is no need to create such a CMS again. There are plenty of them.
I concur with dusoft that this has already been done.
DotNetBlogEngine.net is an ASP.NET (C#) based blogging system that has a nice XML back-end as an option.
Doesn't answer your question directly but check Unify.
If you do not want to write a new one, or want to get some inspiration:
Flatpress
Simple PHP Blog
Ninja Designs are working on a DB-free WordPress clone
You could either use XML, or use SQL Compact (which allows for handling things much like SQL Server, but instead of a database server you utilize flat files).