REST API Field Requirements - PHP

I'm curious whether there is a way to send field requirements (type, length, required) together with the API response, which I can then use for form validation.
What I expect is the following:
Page loads
Gets the required fields and their requirements from the API
Builds the form based on those requirements

Just add more properties to the response body to specify the requirements. For example:
{
"fields": [{
"field": "username",
"type": "String",
"minLength": 3,
"maxLength": 20,
"required": true
}, {
"field": "password",
"type": "String",
"minLength": 6,
"maxLength": 15,
"required": true
}]
}
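A minimal client-side sketch of the idea, in plain JavaScript: turn each field descriptor from the response above into the attributes an `<input>` element would need for native browser validation. The `toInputAttrs` helper is hypothetical, not part of any library.

```javascript
// Map one field descriptor from the API response to the attributes
// an <input> element would need for native browser validation.
function toInputAttrs(field) {
  var attrs = { name: field.field };
  if (field.type === "String") attrs.type = "text";
  if (field.minLength != null) attrs.minlength = field.minLength;
  if (field.maxLength != null) attrs.maxlength = field.maxLength;
  if (field.required) attrs.required = true;
  return attrs;
}

// Example response body from the question; in practice this would
// come from a fetch/XHR call to your API.
var response = {
  fields: [
    { field: "username", type: "String", minLength: 3, maxLength: 20, required: true },
    { field: "password", type: "String", minLength: 6, maxLength: 15, required: true }
  ]
};

var inputs = response.fields.map(toInputAttrs);
console.log(inputs);
```

From here, the page would loop over `inputs` and create the actual DOM elements.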


Wordpress REST API: filter by custom taxonomy

I have a custom post type called "products" and it has a taxonomy called "domain".
I am using the WP REST API and AngularJS 1.5.1, with a service to get the product posts. This part works fine.
getProducts: function () {
    return $http.get('URL/wp-json/wp/v2/products').then(function (result) {
        return result.data;
    });
}
This returns an array of products, each being (partial):
{
"id": 29,
"date": "2017-10-09T16:21:56",
"date_gmt": "2017-10-09T16:21:56",
"guid": {
"rendered": "URL/?post_type=product&p=29"
},
"modified": "2017-10-09T19:58:32",
"modified_gmt": "2017-10-09T19:58:32",
"slug": "product-name",
"status": "publish",
"type": "product",
"link": "URL/product/product-name/",
"title": {
"rendered": "product name"
},
"content": {
"rendered": "some content",
"protected": false
},
"featured_media": 30,
"template": "",
"domain": [
2
],
...
}
As you can see, my taxonomy term "domain" is represented by a number, "2" in this case.
However, using Postman, if I do a GET as described here:
URL/wp-json/wp/v2/products?filter[domain]=2
I still get all my products back, not just the ones with domain=2 as I expected.
What am I missing here?
It seems WP removed the filter parameter in v4.7. You can get a plugin that adds the feature back in here.
I just merged that function into my functions.php, and it works like this:
/wp-json/wp/v2/products?filter[taxonomy_name]&filter[term]=taxonomy-slug

JSON Schema Requirement Enforcement

So this is my first time using JSON Schema and I have a fairly basic question about requirements.
My top level schema is as follows:
schema.json:
{
"id": "http://localhost/srv/schemas/schema.json",
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"properties": {
"event": { "$ref": "events_schema.json#" },
"building": { "$ref": "buildings_schema.json#" }
},
"required": [ "event" ],
"additionalProperties": false
}
I have two other schema definition files (events_schema.json and buildings_schema.json) that have object field definitions in them. The one of particular interest is buildings_schema.json.
buildings_schema.json:
{
"id": "http://localhost/srv/schemas/buildings_schema.json",
"$schema": "http://json-schema.org/draft-04/schema#",
"description": "buildings table validation definition",
"type": "object",
"properties": {
"BuildingID": {
"type": "integer",
"minimum": 1
},
"BuildingDescription": {
"type": "string",
"maxLength": 255
}
},
"required": [ "BuildingID" ],
"additionalProperties": false
}
I am using this file to test my validation:
test.json:
{
"event": {
"EventID": 1,
"EventDescription": "Some description",
"EventTitle": "Test title",
"EventStatus": 2,
"EventPriority": 1,
"Date": "2007-05-05 12:13:45"
},
"building": {
"BuildingID": 1
}
}
Which passes validation fine. But when I use the following:
test2.json
{
"event": {
"EventID": 1,
"EventDescription": "Some description",
"EventTitle": "Test title",
"EventStatus": 2,
"EventPriority": 1,
"Date": "2007-05-05 12:13:45"
}
}
I get the error: [building] the property BuildingID is required
Inside my buildings_schema.json file I have the line "required": [ "BuildingID" ], which is what causes the error. It appears that schema.json is traversing down the property definitions and enforcing all the requirements. This is counterintuitive; I would like it to enforce a requirement ONLY if its parent property is present.
I have a few ways around this that involve arrays and fundamentally changing the structure of the JSON, but that kind of defeats the purpose of my attempt to validate existing JSON. I have read over the documentation (/sigh) and have not found anything relating to this issue. Is there some simple requirement-inheritance setting I am missing?
I am using the Json-Schema for PHP implementation from here: https://github.com/justinrainbow/json-schema
After messing with different validators, it appears to be an issue with this particular validator: it assumes required inheritance through references. I fixed it by breaking the main schema apart into subschemas and using the subschema with the required list only when necessary.
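For what it's worth, draft-04 itself only applies a subschema's keywords when the instance property is actually present, so the behavior the question asks for can be illustrated without any validator library. The `checkRequired` helper below is a hypothetical sketch (it is not the justinrainbow/json-schema API); it enforces a subschema's required list only when the parent property exists, using the property names from the schemas above.

```javascript
// Enforce a subschema's "required" list only if the parent property
// exists in the instance; an absent parent produces no errors.
function checkRequired(schema, data, key) {
  if (!(key in data)) return [];            // parent absent: nothing to enforce
  var missing = [];
  (schema.required || []).forEach(function (name) {
    if (!(name in data[key])) missing.push(key + "." + name);
  });
  return missing;
}

// Simplified stand-in for buildings_schema.json
var buildingsSchema = { required: ["BuildingID"] };

checkRequired(buildingsSchema, { building: { BuildingID: 1 } }, "building"); // []
checkRequired(buildingsSchema, { event: {} }, "building");                   // [] -- "building" absent, no error
checkRequired(buildingsSchema, { building: {} }, "building");                // ["building.BuildingID"]
```

This matches the semantics of test2.json above: omitting "building" entirely should not trigger the BuildingID requirement.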

Elasticsearch - use EdgeNGram analyzer for case insensitive search

I want to make searches on fields with an EdgeNGram analyzer case insensitive. I am using ES in PHP via Elastica.
I have a table of users:
{
"user": {
"analyzer": "analyzer_edgeNGram",
"properties": {
"admin": {
"type": "boolean"
},
"firstName": {
"type": "string",
"analyzer": "analyzer_edgeNGram"
},
"lastName": {
"type": "string",
"analyzer": "analyzer_edgeNGram"
},
"username": {
"type": "string",
"analyzer": "analyzer_edgeNGram"
}
}
}
}
My analyzers look like this (you can see there is a lowercase filter in the edgeNGram analyzer):
"index.analysis.filter.asciifolding.type": "asciifolding",
"index.number_of_replicas": "1",
"index.analysis.filter.standard.type": "standard",
"index.analysis.tokenizer.edgeNGram.token_chars.1": "digit",
"index.analysis.tokenizer.edgeNGram.max_gram": "10",
"index.analysis.analyzer.analyzer_edgeNGram.type": "custom",
"index.analysis.tokenizer.edgeNGram.token_chars.0": "letter",
"index.analysis.filter.lowercase.type": "lowercase",
"index.analysis.tokenizer.edgeNGram.side": "front",
"index.analysis.tokenizer.edgeNGram.type": "edgeNGram",
"index.analysis.tokenizer.edgeNGram.min_gram": "1",
"index.analysis.tokenizer.standard.type": "standard",
"index.analysis.analyzer.analyzer_edgeNGram.filters": "standard,lowercase,asciifolding",
"index.analysis.analyzer.analyzer_edgeNGram.tokenizer": "edgeNGram",
"index.number_of_shards": "1",
"index.version.created": "900299"
There is, for example, a user with firstName Miroslav. If I run a query like this:
{"query": {"match": {"firstName": "miro"}}}
I get 0 hits. But if I change miro to Miro in the query, it finds the user.
I've checked how the tokens are generated, and they are case sensitive: M, Mi, Mir, ...
Any advice on how to achieve case-insensitive searching?
Thank you
The default search_analyzer is standard and has the following settings:
"analyzer": {
"rebuilt_standard": {
"tokenizer": "standard",
"filter": [
"lowercase"
]
}
}
So by default your queries should be case insensitive, but you can always try setting search_analyzer to something else. From the docs:
Sometimes, though, it can make sense to use a different analyzer at search time, such as when using the edge_ngram tokenizer for autocomplete.
By default, queries will use the analyzer defined in the field mapping, but this can be overridden with the search_analyzer setting:
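For example, the field mapping could keep the edge-n-gram analyzer for indexing but use the standard analyzer at search time. This is a sketch, not a drop-in fix: the exact parameter name depends on your Elasticsearch version (very old releases split this into index_analyzer and search_analyzer).

```json
"firstName": {
  "type": "string",
  "analyzer": "analyzer_edgeNGram",
  "search_analyzer": "standard"
}
```

With this, "miro" at search time is lowercased and matched against the lowercase edge-n-gram tokens produced at index time.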

Bryntum component integration

I am trying to integrate the Bryntum scheduler component in PHP. I am not very familiar with Ext JS.
Please see the images here
Here, the Name field is fetched properly, whereas Capacity is not. These values come from Zoho CRM.
My code is here: Click. The r-read.php file is responsible for fetching the records from the CRM and storing them in JSON format. It looks like this:
{
"success": true,
"total": 9,
"root": [{
"Id": 1,
"Name": "Sri Test",
"Capicity": "190.0"
}, {
"Id": 2,
"Name": "tester_test01",
"Capicity": "500.0"
}, {
"Id": 3,
"Name": "Tesing room 23",
"Capicity": "5000.0"
}, {
"Id": 4,
"Name": "Test for 6th product",
"Capicity": "5000.0"
}, {
"Id": 5,
"Name": "Banquet hall test-01",
"Capicity": "500.0"
}, {
"Id": 6,
"Name": "test room",
"Capicity": "1000.0"
}, {
"Id": 7,
"Name": "Grande Ballroom",
"Capicity": "4000.0"
}, {
"Id": 8,
"Name": "Cedar Room",
"Capicity": "1400.0"
}, {
"Id": 9,
"Name": "Maple Room",
"Capicity": "1200.0"
}]
}
The Capacity column should show values like 190.0, 500.0, 5000.0, etc., just as the Name column shows the names.
I'm not familiar with the Bryntum scheduler component, but most of the time when you have problems like these, it's because you didn't define the Capacity field in your model.
I saw you used the following model: Sch.model.Resource. Could it be that it only has the Name field and not Capacity? Your JSON response looks fine to me.
In the sample JSON above, Capacity is spelled Capicity.
Check whether the same spelling is used everywhere; then the data should resolve properly.
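Until the spelling is unified on the server side, one dependency-free workaround is to normalize the response on the client before handing it to the scheduler. The `normalizeRooms` helper below is a hypothetical sketch (keys taken from the JSON above, not a Bryntum API):

```javascript
// Rename the misspelled "Capicity" key to "Capacity" on every record,
// converting the stringified number to a real float along the way.
function normalizeRooms(response) {
  return response.root.map(function (r) {
    return { Id: r.Id, Name: r.Name, Capacity: parseFloat(r.Capicity) };
  });
}

// Trimmed-down version of the response shown in the question
var response = {
  success: true,
  total: 2,
  root: [
    { Id: 1, Name: "Sri Test", Capicity: "190.0" },
    { Id: 8, Name: "Cedar Room", Capicity: "1400.0" }
  ]
};

var rooms = normalizeRooms(response);
console.log(rooms);
```

The model can then map its Capacity field directly, with no spelling mismatch between the store and the JSON.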

Which schema is better in web service API design

Recently, our team started developing mobile (iPhone, Android) applications for our existing website, so users can read our content more easily through an app.
But our team has different views on the JSON schema the API should return. Below are the sample responses.
Schema type 1:
{
"success": 1,
"response": {
"threads": [
{
"thread_id": 9999,
"title": "Topic haha",
"content": "blah blah blah",
"category": {
"category_id": 100,
"category_name": "Chat Room",
"category_permalink": "http://sample.com/category/100"
},
"user": {
"user_id": 1,
"name": "Hello World",
"email": "helloworld@hello.com",
"user_permalink": "http://sample.com/user/Hello_World"
},
"post_ts": "2012-12-01 18:16:00T0800"
},
{
"thread_id": 9998,
"title": "asdasdsad ",
"content": "dsfdsfdsfds dsfdsf ds",
"category": {
"category_id": 101,
"category_name": "Chat Room 2",
"category_permalink": "http://sample.com/category/101"
},
"user": {
"user_id": 2,
"name": "Hello baby",
"email": "hellobaby@hello.com",
"user_permalink": "http://sample.com/user/2"
},
"post_ts": "2012-12-01 18:15:00T0800"
}
]
}
}
Schema type 2:
{
"success": 1,
"response": {
"threads": [
{
"thread_id": 9999,
"title": "Topic haha",
"content": "blah blah blah",
"category": 100,
"user": 1,
"post_ts": "2012-12-01 18:16:00T0800"
},
{
"thread_id": 9998,
"title": "asdasdsad ",
"content": "dsfdsfdsfds dsfdsf ds",
"category": 101,
"user": 2,
"post_ts": "2012-12-01 18:15:00T0800"
}
],
"category": [
{
"category_id": 100,
"category_name": "Chat Room",
"category_permalink": "http://sample.com/category/100"
},
{
"category_id": 101,
"category_name": "Chat Room 2",
"category_permalink": "http://sample.com/category/101"
}
],
"user": [
{
"user_id": 1,
"name": "Hello World",
"email": "helloworld@hello.com",
"user_permalink": "http://sample.com/user/Hello_World"
},
{
"user_id": 2,
"name": "Hello baby",
"email": "hellobaby@hello.com",
"user_permalink": "http://sample.com/user/Hello_baby"
}
]
}
}
Some developers claim that schema type 2:
can reduce the data size when the category & user entities are heavily duplicated; it really does shrink the plain-text response by at least 20-40%
with a smaller payload, parsing it into a JSON object uses less memory
category & user can be stored in a hash map, which makes them easy to reuse
reduces the overhead of retrieving data
I have no idea whether schema type 2 is really an improvement. I have read a lot of API documentation and have never seen this type of schema design; to me, it looks like a relational database. So I have a few questions, because I have no experience designing a web service API.
Does it go against API design principles (easy to read, easy to use)?
Does it really parse faster and use less memory on the iOS/Android platforms?
Can it reduce the overhead between client and server?
Thank you.
When I build such an application for Android, I parse the JSON just once and put it in a database; later I use a ContentProvider to access it. In your case you could use the second schema, but without the user and category parts. Use lazy loading instead; it is only a good solution if categories and users repeat often.
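The hash-map reuse argued for above can be sketched in plain JavaScript: build id-to-entity lookup tables once, then resolve each thread's numeric references, reconstructing the schema type 1 shape on the client. The `denormalize` helper and the trimmed sample data are illustrative, not part of any API.

```javascript
// Given the inner "response" object of schema type 2, build
// id -> entity hash maps and inline them into each thread.
function denormalize(body) {
  var categories = {}, users = {};
  body.category.forEach(function (c) { categories[c.category_id] = c; });
  body.user.forEach(function (u) { users[u.user_id] = u; });
  return body.threads.map(function (t) {
    return {
      thread_id: t.thread_id,
      title: t.title,
      content: t.content,
      category: categories[t.category],  // numeric ref -> full object
      user: users[t.user],
      post_ts: t.post_ts
    };
  });
}

// Trimmed-down sample in the schema type 2 shape
var body = {
  threads: [
    { thread_id: 9999, title: "Topic haha", content: "blah blah blah",
      category: 100, user: 1, post_ts: "2012-12-01 18:16:00T0800" }
  ],
  category: [{ category_id: 100, category_name: "Chat Room" }],
  user: [{ user_id: 1, name: "Hello World" }]
};

var threads = denormalize(body);
console.log(threads);
```

Each lookup is O(1), so the client pays one linear pass over the payload regardless of how often categories and users repeat.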