PHP REST API path hierarchy w/o id - php

I have been reading a book and searching the internet about API path hierarchies and have not found anything solid yet. What I really want to know is where to put the id in the retrieve/update/delete methods of a hierarchical API.
For instance I know I can do:
authority/resource/[id]/catalog/category1/category2
also:
authority/resource/catalog/category1/category2/[id]
In the second example the problem comes when the path segment after category2 (the id) can itself be a numeric field, for example when updating a value.
I do not really know if there is a standard for structuring the paths of a representational state transfer (REST) API.
I could invent my own scheme, but I was wondering whether there is a standard or a recommended approach.

The "standard" allows lot's of interpretations on how you can design your hierarchy. There is not really THE way to do it.
However, I think this presentation:
https://blog.apigee.com/detail/restful_api_design
is a good read on the topic. It outlines some design choices and also shows how some popular APIs (such as the ones offered by Google or Twitter) choose to design their URLs.
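For what it's worth, a convention you will see across many public APIs (and in that presentation) is to place each id immediately after the collection it identifies, at every level of the hierarchy. A hypothetical sketch, with resource names invented for illustration:

    GET    /catalogs/42/categories       (list the categories of catalog 42)
    GET    /catalogs/42/categories/7     (retrieve category 7 of catalog 42)
    PUT    /catalogs/42/categories/7     (update category 7)
    DELETE /catalogs/42/categories/7     (delete category 7)

Read that way, a numeric segment is never ambiguous: every id belongs to the collection named immediately before it.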

Related

Create and Search user posts in PHP

This is my first venture into backend development; I have mostly taken a design role in past projects. I'm working on a personal project and have fleshed out what I feel is the basic logic: just a few very basic user tasks, broken down below.
Core User Actions:
A user can create a new posting
A user can find posts from other users by tag, date created, and other content. This is done in a central search area. (The search string "Dog Saddle" retrieves posts mentioning dog saddles, dogs, and saddles.)
A post's creator can be contacted from the post.
A user can delete their created posts.
I need guidance/suggestions with the following:
What data should I capture for users?
What framework is best for the application dynamics I've described? (RoR, Python, PHP, etc. I'm a one-man team currently.)
Are there open source projects I may gain reference from?
I'm very dedicated to learning on my own, and can make use of good advice!
Thanks,
Given the rather generic requirements you've outlined and the fact that you are just entering the development arena, you should try an established framework. That way you won't need to write everything from scratch. You'll still have plenty of control, but gain the benefit of many commonly used functions, classes, etc. that will speed up your development.
Give something like CodeIgniter for PHP a try. See http://codeigniter.com/. There are a lot of tutorials online to help you get started. For example, see http://net.tutsplus.com/sessions/codeigniter-from-scratch/.
What you've described doesn't suggest a particular language at all. RoR, Python, PHP: you could use any of those to create what you've described. PHP is considered by many to have a less steep learning curve than RoR and Python, which is why I recommended it. However, there are frameworks for those other languages as well that will give you the same benefits as CodeIgniter.
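To make that concrete, here is a minimal sketch of what a CodeIgniter (3.x-style) controller for the search feature might look like; Post_model and the view name are hypothetical pieces you would write yourself:

    <?php
    class Posts extends CI_Controller {
        public function search() {
            // Read the search term from the query string, e.g. /posts/search?q=dog+saddle
            $term = $this->input->get('q');

            // Post_model is a hypothetical model encapsulating the database queries
            $this->load->model('post_model');
            $posts = $this->post_model->search($term);

            // Hand the matching posts to a (hypothetical) results view
            $this->load->view('posts/results', array('posts' => $posts));
        }
    }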

Theory/Magic behind creating API

I'm a newbie PHP programmer and I have a few questions about creating a REST-based API service.
Basically, it's going to be an open source GitHub project which will scrape various data off the web and offer it as an XML API. Here are my questions about how I should, or could, do this.
1) Since there isn't a single robust pattern for extracting various data through scraping, what is the best way to actually output the XML?
I mean, the PHP file would extract data at various points in the code and end up being a lot of lines. Is it a good idea to put the code that outputs the result in there as well?
2) Is there a way to organize the scraping code into some sort of class?
I can't think of an approach that would work besides a linear one, where I don't even define a function of my own and just call built-in functions as I go.
3) If there is a way to do that, how would I output the result?
Is there any approach besides using a second file that gets the contents from the main file and displays them?
4) If I were to offer the API in both XML and JSON, is there a way to convert one result format to the other, or will I have to manually create the fields in JSON or XML and place the content in there?
More questions might arise once these have been answered, but I hope this clears everything up. Also, this assumes the results are not fetched from a DB, so the data has to be scraped/tabulated on every request (even though caching will be implemented later).
Thanks
This question is probably more appropriate on https://codereview.stackexchange.com/
Not to be rude, but a newbie programmer developing an API is like a first-year med student offering to do open-heart transplants for free. I understand that you believe that you can program, but if you intend to release publicly accessible code, you probably need more experience. Otherwise guys like me will muck through it and file bug reports ridiculing your code.
That said, if you want the theory of good API design you should probably check out Head First Object Oriented Analysis and Design. You'll want to focus on these key concepts:
Program to an Interface, not an Implementation
Encapsulate what varies
...and follow other good design principles.
...honestly, there's a lot that goes into good interface and good systems design. You can use this as a learning exercise, but let people know they shouldn't rely on your code. They should also know that screen scraping is far more brittle and unstable than web service API requests anyway, but many don't.
That said, to provide some initial guidance:
Yes, use OOP. Encapsulate the part that actually does the scraping (presumably using cURL) in a class. This will allow you to switch scraping engines transparently to the end user. Encapsulate your outputs in classes, which will allow for easy extension (i.e. if JSON output is in a single-responsibility class and XML output is in another, I can add RSS output easily by making a new class without breaking your old code; see the sketch after this list)
Think about the contracts your code must live up to. That will drive the interface. If you are scraping a particular type of data (say, sports scores for a given day), those should drive the types of operations available (i.e. function getSportsScoresForDate(date toGet))
Start with your most abstract/general operations at a top level interface, then use other interfaces that extend that interface. This allows users to have interfaces at different levels of granularity (i.e. class SensorDataInterface has a method getData(). HeartRateMonitorInterface extends SensorDataInterface and adds getDataForTimeInterval())
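To illustrate the first and third points, here is a minimal PHP sketch; all class and method names are invented for illustration:

    <?php
    // Common contract: every output format must implement this interface.
    interface OutputFormatter {
        public function format(array $data): string;
    }

    class JsonOutput implements OutputFormatter {
        public function format(array $data): string {
            return json_encode($data);
        }
    }

    class XmlOutput implements OutputFormatter {
        public function format(array $data): string {
            $xml = new SimpleXMLElement('<results/>');
            foreach ($data as $key => $value) {
                // addChild() does not escape entities, so escape the value ourselves
                $xml->addChild($key, htmlspecialchars((string) $value));
            }
            return $xml->asXML();
        }
    }

    // Callers depend only on the interface, so adding an RssOutput later
    // means writing one new class and touching nothing else.
    function respond(OutputFormatter $formatter, array $data): void {
        echo $formatter->format($data);
    }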

CMS architecture - which way to go? [closed]

For the past few weeks I haven't been able to stop thinking about the architecture of a CMS I have to develop shortly. First of all, I want to answer the questions you are most likely to ask me.
Q: Have you read the similar questions on StackOverflow?
A: Yes, I have. Unfortunately none applies to what I have to ask.
Q: Why would you want to code another CMS???
A: Well, this is a bit of a long answer. Long story short: it has to be closed-source and it has to meet the requirements I need (it won't be 100% a CMS, it's a much more delicate subject - any project developed with it will be somewhere between 60-70% CMS and the rest will be custom code for that project's specific needs)
Tools of the trade:
PHP
Zend Framework (my personal choice; I'm very familiar with it and I won't change it for this task whatsoever)
What I need this "CMS" to be
Developer oriented
Since it won't be a pure 100% CMS it has to be developer oriented - easy to maintain, easy to develop against (more like the feeling when developing on an enterprise level framework)
User/Editor oriented
While working on a project like this one, any programmer might find himself going down the wrong path, forgetting that the person who will work as an editor using this CMS is not in fact a very technical person. This is where conflicts happen: if it's not simple to use, yet powerful enough, you will have a lot of problems to deal with.
i18n & l10
I'm almost certain it will be difficult to code something that is both developer and user oriented, and if I can't achieve this, I would rather give more importance to the developer than to the editor. I am aware it's not a good idea to ignore the actual user of the software, but this CMS has to be fast to develop against.
Possible architecture patterns
1. General object
The first architectural design that got me thinking was the following:
I define a general object. The user/admin goes in the administration area and defines the data object he needs. Easy one.
The page object (for example) has a title field, a body field and a slug. It has just been defined by the user, and now he can create content based on this "data structure". Seems pretty neat, but I still haven't solved some of this architecture's problems.
How will those dynamic objects be stored in the database? Should I use a dataTypes table and an objects table, linked many-to-many via an objectProperties table (see the sketch after this list)?
Or maybe should I serialize them and store everything in the objects table?
Should I give the user the possibility to create new dataType properties and add them to his objects?
How will I i18n all of this?
Isn't it too complicated to force the user to define his own data structures?
Will it be too messy once there are a lot of data structures defined, in multiple languages? Will it still be manageable?
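For reference, here is a minimal sketch of the entity-attribute-value layout the first question describes, expressed as the SQL a hypothetical PHP installer might run (all table and column names are invented for illustration; the lang column is one simple answer to the i18n question):

    <?php
    $pdo = new PDO('mysql:host=localhost;dbname=cms', 'user', 'pass');

    // One row per user-defined structure ("page", "category", ...)
    $pdo->exec('CREATE TABLE dataTypes (
        id INT AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(64) NOT NULL
    )');

    // One row per content item
    $pdo->exec('CREATE TABLE objects (
        id INT AUTO_INCREMENT PRIMARY KEY,
        dataTypeId INT NOT NULL,
        FOREIGN KEY (dataTypeId) REFERENCES dataTypes(id)
    )');

    // One row per field value, keyed by property name and language
    $pdo->exec("CREATE TABLE objectProperties (
        objectId INT NOT NULL,
        property VARCHAR(64) NOT NULL,
        lang CHAR(2) NOT NULL DEFAULT 'en',
        value TEXT,
        PRIMARY KEY (objectId, property, lang),
        FOREIGN KEY (objectId) REFERENCES objects(id)
    )");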
2. Extreme developer oriented - scaffold data structures
Today I found myself thinking about this possibility. It would be pretty neat to define the data structure of an object in a YAML or INI file and then scaffold its database table, model and CRUD. One could also mention its relations to other "data structure" objects to get the right CRUD implementation (think about a page data structure and a category data structure, for example).
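As a rough illustration of the idea (the file format and the generator are entirely hypothetical), such a scaffolder could read an INI definition and emit the CREATE TABLE statement:

    <?php
    // page.ini might contain:
    //   [page]
    //   title = string
    //   body  = text
    //   slug  = string
    $sqlTypes = array('string' => 'VARCHAR(255)', 'text' => 'TEXT');

    $definition = parse_ini_file('page.ini', true);
    foreach ($definition as $object => $fields) {
        $columns = array('id INT AUTO_INCREMENT PRIMARY KEY');
        foreach ($fields as $name => $type) {
            $columns[] = $name . ' ' . $sqlTypes[$type];
        }
        printf("CREATE TABLE %s (\n  %s\n);\n", $object, implode(",\n  ", $columns));
    }

Generating the model and CRUD classes would work the same way: iterate over the parsed fields and write out the boilerplate.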
Unfortunately this made me also think about several possible problems.
Will I be able to code such a scaffolding tool on top of Zend Framework? (which is known to lack scaffolding, apart from the 2 or 3 suggestions made by the community, like this one)
Is it too clumsy to create 2 tables for each object so I can internationalize it?
Would it be too restrictive for the user if he can't define new data structures without a programmer?
Conclusion
I'm still very confused on how to approach this subject and what architecture to choose. At this very moment both are pretty challenging in terms of development.
My questions are pretty straight-forward.
If you had to choose one of these approaches, which would it be and why?
Can you maybe suggest a more interesting pattern to follow, considering my needs?
Just some general advice: if you are really trying to manage free-form content, then I would stay away from relational databases for storing your data and go with an XML solution. A relational database has too much structure for something that is purely content oriented. Think of a home page... You have info displayed like: welcome notice, about us, who we are. That doesn't really map well to a table / collection of tables, especially when you start adding / removing some of those items. Something with a defined structure, like Stack Overflow, does map well to a relational database, however.
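To show what free-form storage can look like, here is a small sketch using PHP's SimpleXML; the document shape is invented for illustration:

    <?php
    // A home page as a free-form content document: blocks can be
    // added or removed without any schema migration.
    $page = new SimpleXMLElement('<page slug="home"/>');
    $page->addChild('block', 'Welcome to our site!')->addAttribute('type', 'welcome');
    $page->addChild('block', 'We are a small team of...')->addAttribute('type', 'about-us');
    file_put_contents('home.xml', $page->asXML());

    // Reading it back:
    $loaded = simplexml_load_file('home.xml');
    foreach ($loaded->block as $block) {
        echo $block['type'], ': ', (string) $block, "\n";
    }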
Take a look at Day CMS, Apache Sling, Java Content Repository for some ideas.
http://www.day.com/day/en.html
http://sling.apache.org/site/index.html
http://en.wikipedia.org/wiki/Content_repository_API_for_Java
From my point of view, more options are always a problem. Users are mostly ignorant when it comes to complex systems. Therefore I'd stick with the developer-oriented solution. The developer will decide what kind of content can be displayed. Optionally I would allow some kind of "open" content for power users, allowing complex CSS/HTML/JS. Complex content like photo galleries, user profiles, etc. should not be designed by BFUs.
So to sum up, the main feature is creating pages that can be dropped anywhere in the structure (which should be very flexible). If they want user profiles, you can create a new type of page. But at the end of the day, BFUs can do anything given enough time. It depends on the price/time scale. If they can pay for it and need it fast, they will have you create a new user-profile page type that will be easy to fill. If they are kind of poor, they'll choose to set it all up by themselves using a normal page and a WYSIWYG editor :D

How to Build Basic "To-Do" list style Website

I am trying to create a website that will allow me to list all of the different types of beers I have tried, including name, type, location, and brief tasting notes. I have a basic login created and believe that I will have to store the information about the beer in a database as well (with a column for each of the elements). I was wondering a) if this is how people would suggest going about doing this and b) if anyone knows of good tutorials on how to set this up. I plan on using MySQL and PHP for the database and jQuery for the visual side of things. I am relatively new at this, so I am having trouble figuring out what exactly to Google to find what I am looking for.
I plan on going about it similar to a to-do list (only each element would have multiple attributes — name, type, etc.). Any help/suggestions/direction would be awesome! Thanks!
First off you need to decide on the features you want to implement, and then work out which to do first.
For example,
you need a database, which has a table for your beer info. (but do you need another one for people to have a user account too?)
you need to create a set of functions that you can access from the web site (a sketch follows below), e.g.
list beers
add beer
etc.
How do you want the front end to work?
How do you want the front end to look?
Once you know exactly what you want to do, it's much easier to break down the tasks into jobs you need the application to do.
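For instance, here is a minimal sketch of the "list beers" and "add beer" functions using PDO and MySQL; the table and column names are invented for illustration:

    <?php
    function connect(): PDO {
        return new PDO('mysql:host=localhost;dbname=beerlog', 'user', 'pass',
            array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
    }

    function listBeers(PDO $db): array {
        return $db->query('SELECT name, type, location, notes FROM beers')
                  ->fetchAll(PDO::FETCH_ASSOC);
    }

    function addBeer(PDO $db, string $name, string $type, string $location, string $notes): void {
        // A prepared statement guards against SQL injection
        $stmt = $db->prepare('INSERT INTO beers (name, type, location, notes) VALUES (?, ?, ?, ?)');
        $stmt->execute(array($name, $type, $location, $notes));
    }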
I'd also suggest you look at Ruby on Rails (especially with the Hobo addon) to get up and running faster (instead of PHP). If you are set on PHP, have a look at CakePHP or another similar framework, so that you don't end up re-inventing the wheel.
Update:
Once you get started, you will face further, more detailed problems; for many of them you can get a quick answer from Google or the documentation for the language / database, etc. If something is extra tricky, post another question on StackOverflow.
As it stands, your question is too general for a more specific answer, but if you need any additional info, just yell.

PHP find relevance

Say I have a collection of 100,000 articles across 10 different topics. I don't know which articles actually belong to which topic, but I have the entire news article (and can analyze it for keywords). I would like to group these articles according to their topics. Any idea how I would do that? Any engine (Sphinx, Lucene) is OK.
In terms of machine learning/data mining, this kind of problem is called a classification problem. The easiest approach is to use past data for future prediction, i.e. a statistical approach:
http://en.wikipedia.org/wiki/Statistical_classification, in which you can start with the Naive Bayes classifier (commonly used in spam detection); see the toy sketch at the end of this answer.
I would suggest you read this book (although it is written for Python): Programming Collective Intelligence (http://www.amazon.com/Programming-Collective-Intelligence-Building-Applications/dp/0596529325); it has a good example.
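To make the Naive Bayes idea concrete, here is a toy word-count sketch in PHP; it is not production quality (a real system would add stemming, stop-word removal, and class priors):

    <?php
    // Train: count how often each word appears in each topic.
    function train(array $docsByTopic): array {
        $counts = array();
        foreach ($docsByTopic as $topic => $docs) {
            $counts[$topic] = array('total' => 0, 'words' => array());
            foreach ($docs as $doc) {
                foreach (str_word_count(strtolower($doc), 1) as $word) {
                    $counts[$topic]['words'][$word] =
                        ($counts[$topic]['words'][$word] ?? 0) + 1;
                    $counts[$topic]['total']++;
                }
            }
        }
        return $counts;
    }

    // Classify: pick the topic with the highest log-likelihood,
    // using add-one (Laplace) smoothing for unseen words.
    function classify(string $doc, array $counts, int $vocabSize): string {
        $best = '';
        $bestScore = -INF;
        foreach ($counts as $topic => $data) {
            $score = 0.0;
            foreach (str_word_count(strtolower($doc), 1) as $word) {
                $freq = $data['words'][$word] ?? 0;
                $score += log(($freq + 1) / ($data['total'] + $vocabSize));
            }
            if ($score > $bestScore) {
                $bestScore = $score;
                $best = $topic;
            }
        }
        return $best;
    }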
Well, an Apache project providing machine learning libraries is Mahout. Its features include:
[...] Clustering takes e.g. text documents and groups them into groups of topically related documents. Classification learns from existing categorized documents what documents of a specific category look like and is able to assign unlabelled documents to the (hopefully) correct category. [...]
You can find Mahout under http://mahout.apache.org/
Although I have never used Mahout, having only considered it ;-), it always seemed to require a decent amount of theoretical knowledge. So if you plan to spend some time on the issue, Mahout would probably be a good starting point, especially since it's well documented. But don't expect it to be easy ;-)
Dirt simple way to create a classifier:
Hand read and bucket N example documents from the 100K into each one of your 10 topics. Generally, the more example documents the better.
Create a Lucene/Sphinx index with 10 documents corresponding to each topic. Each document will contain all of the example documents for that topic concatenated together.
To classify a document, submit that document as a query, making every word an OR term (see the sketch below). You'll almost always get all 10 results back. Lucene/Sphinx will assign a score to each result, which you can interpret as the document's "similarity" to each topic.
Might not be super-accurate, but it's easy if you don't want to go through the trouble of training a real Naive Bayes classifier. If you want to go that route you can Google for WEKA or MALLET, two good machine learning libraries.
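With Sphinx's PHP client (the sphinxapi class that ships with Sphinx), the query step of that recipe might look like this; the index name is hypothetical:

    <?php
    require 'sphinxapi.php';

    $cl = new SphinxClient();
    $cl->SetServer('localhost', 9312);
    // SPH_MATCH_ANY treats every word in the query as an OR term.
    $cl->SetMatchMode(SPH_MATCH_ANY);

    // $article holds the full text of the document to classify;
    // 'topics' is the index holding the 10 concatenated topic documents.
    $result = $cl->Query($article, 'topics');

    if ($result !== false) {
        foreach ($result['matches'] as $docId => $match) {
            // The weight is a rough "similarity" of the article to topic $docId.
            echo "topic $docId: weight {$match['weight']}\n";
        }
    }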
Excerpt from Chapter 7 of "Algorithms of the Intelligent Web" (Manning 2009):
"In other words, we’ll discuss the adoption of our algorithms in the context of a hypothetical
web application. In particular, our example refers to a news portal, which is inspired by the Google News website."
So, the content of Chapter 7 from that book should provide you with code for, and an understanding of, the problem that you are trying to solve.
You could use Sphinx to search the articles for each of the 10 different topics, then set a threshold on the number of matches that makes an article linked to a particular topic, and so on.
I recommend the book "Algorithms of the Intelligent Web" by Haralambos Marmanis and Dmitry Babenko. There's a chapter on how to do this.
I don't think it is possible to completely automate this, but you could do most of it. The problem is: where would the topics come from?
Extract a list of the most distinctive (non-common) words and phrases from each article and use those as tags (see the sketch below).
Then I would make a list of topics, assign words and phrases that fall within each topic, and match those against the tags. The problem is that you might get more than one topic per article.
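A quick sketch of the tag-extraction step in PHP; the stop-word list stands in for "common words" and would be much longer in practice:

    <?php
    function extractTags(string $article, int $limit = 10): array {
        // Words too common to be useful as tags; a real list would be longer.
        $stopWords = array('the', 'a', 'an', 'and', 'or', 'of', 'to', 'in', 'is', 'it');

        $words = str_word_count(strtolower($article), 1);
        $counts = array_count_values($words);
        foreach ($stopWords as $stop) {
            unset($counts[$stop]);
        }
        arsort($counts);  // most frequent remaining words first
        return array_slice(array_keys($counts), 0, $limit);
    }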
Perhaps the best way would be to use some form of Bayesian classifiers to determine which topic best describes the article. It will require that you train the system initially.
This sort of technique is used in determining whether an email is spam or not.
This article might be of some help
