So I'm performing a query and getting data back like this:
[
{ "part_number": "MAC0009", "description": "Accessory Stand Foot" },
{ "part_number": "MAC0010", "description": "Accessory Stand Collar Tapped M5" },
{ "part_number": "MAC0011", "description": "Accessory Stand Top Collar" },
{ "part_number": "MAC0012", "description": "25mm Round Knob With 2 Rail Holes" }
]
However for the AJAX script I'm trying to implement I need the data in this format:
[
{ "MAC0009" : "Accessory Stand Foot" },
{ "MAC00010" : "Accessory Stand Collar Tapped M5" },
{ "MAC00012" : "Accessory Stand Top Collar" }
]
So basically I need the plain data back, keyed by part number rather than by column name.
All I have so far is the query.
$result = DB::table('macs')->select('part_number', 'description')->get();
That works fine, but I don't know how to manipulate the data into that format. Any help appreciated.
You can use the lists method:
DB::table('macs')->select('part_number', 'description')->lists('description', 'part_number');
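Note that on Laravel 5.2 and later, lists() has been renamed pluck(). Either method gives you a single key => value collection rather than an array of one-key objects, which is usually what an AJAX endpoint wants anyway. A minimal sketch (returning it straight as JSON; the response handling is just a placeholder):

// Laravel 5.2+: pluck() replaces lists() and returns
// ["MAC0009" => "Accessory Stand Foot", "MAC0010" => "...", ...]
$pairs = DB::table('macs')->pluck('description', 'part_number');

return response()->json($pairs);

If you really do need the array-of-single-key-objects shape shown in the question, you can map each pair into its own object:

$objects = $pairs->map(function ($description, $partNumber) {
    return [$partNumber => $description];
})->values();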
My API call returns a pretty big JSON result, and my initial thought was to parse out the two pieces of data I need for each event and create my own array. Does it make more sense to pass around the returned JSON, or to clean it up for use throughout the application?
Which is more efficient?
Below is an example of one "event"; each result may have 20-50 events in the data. All I need is ['resultsPage']['results']['event']['location']['lng'] and ['resultsPage']['results']['event']['location']['lat']:
{
"resultsPage": {
"results": {
"event": [
{
"id":11129128,
"type":"Concert",
"uri":"http://www.songkick.com/concerts/11129128-wild-flag-at-fillmore?utm_source=PARTNER_ID&utm_medium=partner",
"displayName":"Wild Flag at The Fillmore (April 18, 2012)",
"start": {
"time":"20:00:00",
"date":"2012-04-18",
"datetime":"2012-04-18T20:00:00-0800"
},
"performance": [
{
"artist": {
"id":29835,
"uri":"http://www.songkick.com/artists/29835-wild-flag?utm_source=PARTNER_ID&utm_medium=partner",
"displayName":"Wild Flag",
"identifier": []
},
"id":21579303,
"displayName":"Wild Flag",
"billingIndex":1,
"billing":"headline"
}
],
"location": {
"city":"San Francisco, CA, US",
"lng":-122.4332937,
"lat":37.7842398
},
"venue": {
"id":6239,
"displayName":"The Fillmore",
"uri":"http://www.songkick.com/venues/6239-fillmore?utm_source=PARTNER_ID&utm_medium=partner",
"lng":-122.4332937,
"lat":37.7842398,
"metroArea": {
"id":26330,
"uri":"http://www.songkick.com/metro_areas/26330-us-sf-bay-area?utm_source=PARTNER_ID&utm_medium=partner",
"displayName":"SF Bay Area",
"country": { "displayName":"US" },
"state": { "displayName":"CA" }
}
},
"status":"ok",
"popularity":0.012763
}, ....
]
},
"totalEntries":24,
"perPage":50,
"page":1,
"status":"ok"
}
}
My subjective answer is to just use the entire response in your application, grabbing only what you need when you need it. Taking the time to extract only the data you need might be an unnecessary optimization, and your time could be better spent elsewhere.
Optimize only what you measure. If you can measure your application's execution time, perhaps with the help of a profiler such as Xdebug's, then you can use data to make an informed decision about whether to optimize in this way. My guess is that your application could use optimizations elsewhere before this one, but again, without data, it's just a guess.
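If you do decide to pull out just the coordinates, a minimal sketch (assuming $json holds the raw Songkick response shown above) could look like this:

$data = json_decode($json, true);

$coordinates = [];
foreach ($data['resultsPage']['results']['event'] as $event) {
    // Keep only the two values the application actually needs.
    $coordinates[] = [
        'lat' => $event['location']['lat'],
        'lng' => $event['location']['lng'],
    ];
}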
I have some data for a mixer, like this:
[
{
"sample": "sample1.mp3",
"time": 0
},
{
"sample": "sample2.mp3",
"time": 0.3342
},
{
"sample": "sample3.mp3",
"time": 0.3342,
// stop_after means: stop playing this sample at this point on the overall timeline
"stop_after": 0.3442
},
{
"sample": "sample4.mp3",
"time": 1.22443
},
{
"sample": "sample1.mp3",
"time": 1.223434,
"stop_after": 1.224244
}
]
What is the easiest way to create a single mp3 file from this? I will write a wrapper for it in PHP; I just need to understand the right approach and the algorithm.
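One common approach is to delegate the actual mixing to ffmpeg and only build the command line in PHP. The sketch below is untested and makes a few assumptions: ffmpeg is installed, the samples share a sample rate and channel layout, and the JSON above is stored in mix.json (a hypothetical file name). It trims each sample to its stop_after point, delays it to its start time with adelay, and sums everything with amix:

// Build an ffmpeg command from the mix plan (rough, untested sketch).
$plan = json_decode(file_get_contents('mix.json'), true);

$inputs  = '';
$filters = [];
$labels  = [];

foreach ($plan as $i => $item) {
    $inputs .= ' -i ' . escapeshellarg($item['sample']);

    $chain = "[{$i}:a]";
    if (isset($item['stop_after'])) {
        // Trim the sample so it stops at stop_after on the global timeline.
        $chain .= sprintf('atrim=0:%.4f,', $item['stop_after'] - $item['time']);
    }
    // adelay expects milliseconds, one value per channel (stereo assumed here).
    $delayMs = (int) round($item['time'] * 1000);
    $chain  .= sprintf('adelay=%d|%d[a%d]', $delayMs, $delayMs, $i);

    $filters[] = $chain;
    $labels[]  = "[a{$i}]";
}

// Mix all delayed streams into a single output stream.
$filterComplex = implode(';', $filters) . ';'
    . implode('', $labels)
    . 'amix=inputs=' . count($plan) . ':duration=longest[out]';

$cmd = 'ffmpeg' . $inputs
    . ' -filter_complex ' . escapeshellarg($filterComplex)
    . ' -map "[out]" mix.mp3';

exec($cmd);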
Is there any way to limit the results returned by the Google Books API?
For example the following URL:
https://www.googleapis.com/books/v1/volumes?q=isbn:0751538310
Returns the following:
"kind": "books#volumes",
"totalItems": 1,
"items": [
{
"kind": "books#volume",
"id": "ofTsHAAACAAJ",
"etag": "K6a+5IuCMD0",
"selfLink": "https://www.googleapis.com/books/v1/volumes/ofTsHAAACAAJ",
"volumeInfo": {
"title": "Panic",
"authors": [
"Jeff Abbott"
],
"publisher": "Grand Central Publishing",
"publishedDate": "2006",
"description": "Things are going well for young film-maker Evan Casher - until he receives an urgent phonecall from his mother, summoning him home. He arrives to find her brutally murdered body on the kitchen floor and a hitman lying in wait for him. It is then he realises his whole life has been a lie. His parents are not who he thought they were, his girlfriend is not who he thought she was, his entire existence an ingeniously constructed sham. And now that he knows it, he is in terrible danger. So he is catapulted into a violent world of mercenaries, spies and terrorists. Pursued by a ruthless band of killers who will stop at nothing to keep old secrets buried, Evan's only hope for survival is to discover the truth behind his past. An absolute page-turner, Panic has been acclaimed as one of the most exciting thrillers of recent years.",
"industryIdentifiers": [
{
"type": "ISBN_10",
"identifier": "0751538310"
},
{
"type": "ISBN_13",
"identifier": "9780751538311"
}
],
"readingModes": {
"text": false,
"image": false
},
"pageCount": 408,
"printType": "BOOK",
"categories": [
"Austin (Tex.)"
],
"maturityRating": "NOT_MATURE",
"allowAnonLogging": false,
"contentVersion": "preview-1.0.0",
"imageLinks": {
"smallThumbnail": "http://books.google.com/books/content?id=ofTsHAAACAAJ&printsec=frontcover&img=1&zoom=5&source=gbs_api",
"thumbnail": "http://books.google.com/books/content?id=ofTsHAAACAAJ&printsec=frontcover&img=1&zoom=1&source=gbs_api"
},
"language": "en",
"previewLink": "http://books.google.co.uk/books?id=ofTsHAAACAAJ&dq=isbn:0751538310&hl=&cd=1&source=gbs_api",
"infoLink": "http://books.google.co.uk/books?id=ofTsHAAACAAJ&dq=isbn:0751538310&hl=&source=gbs_api",
"canonicalVolumeLink": "https://books.google.com/books/about/Panic.html?hl=&id=ofTsHAAACAAJ"
},
"saleInfo": {
"country": "GB",
"saleability": "NOT_FOR_SALE",
"isEbook": false
},
"accessInfo": {
"country": "GB",
"viewability": "NO_PAGES",
"embeddable": false,
"publicDomain": false,
"textToSpeechPermission": "ALLOWED",
"epub": {
"isAvailable": false
},
"pdf": {
"isAvailable": false
},
"webReaderLink": "http://books.google.co.uk/books/reader?id=ofTsHAAACAAJ&hl=&printsec=frontcover&output=reader&source=gbs_api",
"accessViewStatus": "NONE",
"quoteSharingAllowed": false
},
"searchInfo": {
"textSnippet": "An absolute page-turner, Panic has been acclaimed as one of the most exciting thrillers of recent years."
}
}
]
}
Is there any way I can return only the title and description? I think it may improve performance of my web application.
I have looked at the partial response but it doesn't seem to work.
I am including my API key in the URL query parameter.
Thanks
I added the params according to the partial response documentation.
See the params in the following link:
https://www.googleapis.com/books/v1/volumes?q=isbn:0751538310&fields=items(volumeInfo/description,volumeInfo/title)
It will return:
{
"items": [
{
"volumeInfo": {
"title": "Panic",
"description": "Things are going well for young film-maker Evan Casher - until he receives an urgent phonecall from his mother, summoning him home. He arrives to find her brutally murdered body on the kitchen floor and a hitman lying in wait for him. It is then he realises his whole life has been a lie. His parents are not who he thought they were, his girlfriend is not who he thought she was, his entire existence an ingeniously constructed sham. And now that he knows it, he is in terrible danger. So he is catapulted into a violent world of mercenaries, spies and terrorists. Pursued by a ruthless band of killers who will stop at nothing to keep old secrets buried, Evan's only hope for survival is to discover the truth behind his past. An absolute page-turner, Panic has been acclaimed as one of the most exciting thrillers of recent years."
}
}
]
}
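In PHP, the request and the response handling for that URL could look roughly like this (an untested sketch; YOUR_API_KEY is a placeholder, and the key parameter can be dropped if you are not sending one):

$isbn   = '0751538310';
$apiKey = 'YOUR_API_KEY';

// Ask only for the title and description of each item via the fields parameter.
$url = 'https://www.googleapis.com/books/v1/volumes'
     . '?q=' . urlencode('isbn:' . $isbn)
     . '&fields=' . urlencode('items(volumeInfo/title,volumeInfo/description)')
     . '&key=' . urlencode($apiKey);

$data = json_decode(file_get_contents($url), true);

foreach ($data['items'] as $item) {
    echo $item['volumeInfo']['title'], PHP_EOL;
    echo $item['volumeInfo']['description'], PHP_EOL;
}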
maxResults
To limit the number of volumes returned, include the maxResults parameter in your query (5 is just an example value):
&maxResults=5
Google's interactive "Try it" tool on the API reference page can help you build the request:
https://developers.google.com/books/docs/v1/reference/volumes/list?apix=true#try-it
Maybe it's too late to respond, but you need to activate the API before accessing it. When you try to access it, the error message tells you to activate it from the console for your project ID. Just copy that URL and it takes you straight to your dashboard, where you can find the activation button; after that you can access the partial response with the desired attributes.
I'm using Elasticsearch's PHP client, and I find it really difficult to return results with scores whenever I want to search for a word that is "hidden" within a string.
This is an example:
I want to get all the documents where the field "file" has the word "anses" and files are named like this:
axx14anses19122015.zip
What I know about it
I know I should tokenize those words, but I can't figure out how to do it.
I've also read about aggregations, but I'm really new to ES and I have to deliver a working piece ASAP.
What I've tried so far
REGEXP: using regular expressions is very expensive and does not return any scores, which are a must-have in order to narrow the results and give the user accurate information.
Wildcards: same thing, slow and no scores.
My own script, where I have a dictionary and search for critical words using regexp; on a match, it creates a new field within the matched document containing the word. The idea is to create a TOKEN so that future searches can use a regular match with scores. The downside: the dictionary approach was flatly rejected by my boss, so I'm here asking for any ideas.
Thanks in advance.
In your case I suggest the nGram tokenizer; see the example below.
First I'll create an analyzer and a mapping for a doc type:
PUT /test_index
{
"settings": {
"number_of_shards": 1,
"analysis": {
"tokenizer": {
"ngram_tokenizer": {
"type": "nGram",
"min_gram": 4,
"max_gram": 4,
"token_chars": [ "letter", "digit" ]
}
},
"analyzer": {
"ngram_tokenizer_analyzer": {
"type": "custom",
"tokenizer": "ngram_tokenizer",
"filter": [
"lowercase"
]
}
}
}
},
"mappings": {
"doc": {
"properties": {
"text_field": {
"type": "string",
"term_vector": "yes",
"analyzer": "ngram_tokenizer_analyzer"
}
}
}
}
}
After that I'll insert a document using your file name:
PUT /test_index/doc/1
{
"text_field": "axx14anses19122015"
}
Now I'll just use a match query:
POST /test_index/_search
{
"query": {
"match": {
"text_field": "anses"
}
}
}
and will receive a response like this:
{
"took": 8,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 0.10848885,
"hits": [
{
"_index": "test_index",
"_type": "doc",
"_id": "1",
"_score": 0.10848885,
"_source": {
"text_field": "axx14anses19122015"
}
}
]
}
}
What did I do?
I just created an nGram tokenizer that splits our string into 4-character terms and indexes those terms separately, so they can be matched when we search for part of the string.
To learn more, read this article: https://qbox.io/blog/an-introduction-to-ngrams-in-elasticsearch
Hope it helps!
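Since the question mentions the Elasticsearch PHP client, the same match query through the official elasticsearch/elasticsearch package might look roughly like this (an untested sketch using the index, type and field names from the example above):

require 'vendor/autoload.php';

// Build a client pointing at the default host (localhost:9200).
$client = Elasticsearch\ClientBuilder::create()->build();

$response = $client->search([
    'index' => 'test_index',
    'type'  => 'doc',
    'body'  => [
        'query' => [
            'match' => [
                'text_field' => 'anses',
            ],
        ],
    ],
]);

foreach ($response['hits']['hits'] as $hit) {
    // Each hit carries the relevance score the question asks for.
    echo $hit['_score'], ' ', $hit['_source']['text_field'], PHP_EOL;
}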
OK, after trying so many times, it worked. I'll share the solution just in case someone else needs it. Thank you so much to Waldemar: it was a really good approach, although I still cannot see why it didn't work for me as written.
curl -XPUT 'http://ipaddresshere/tokentest' -d '
{
  "settings": {
    "number_of_shards": 1,
    "analysis": {
      "analyzer": {
        "myngram": {
          "tokenizer": "mytokenizer"
        }
      },
      "tokenizer": {
        "mytokenizer": {
          "type": "nGram",
          "min_gram": "3",
          "max_gram": "5",
          "token_chars": [ "letter", "digit" ]
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "field": {
          "type": "string",
          "term_vector": "yes",
          "analyzer": "myngram"
        }
      }
    }
  }
}'
So, this will take any string from "field" and split it into nGrams of length 3 to 5. For example, "abcanses14f.zip" will result in:
abc, abca, abcan, bca, bcan, bcans, and so on, until it reaches "anses" or a similar term that is matchable and has a score related to it.
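For completeness, searching that index then looks much like the query in the answer above (a sketch reusing the tokentest index and the "field" name from the mapping):

curl -XPOST 'http://ipaddresshere/tokentest/doc/_search' -d '
{
  "query": {
    "match": {
      "field": "anses"
    }
  }
}'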
I am trying to develop a web application that can fetch data from Asana and generate custom spreadsheet reports. This wrapper class was very helpful in making things simple.
However, I am having a hard time writing code that gets me the team(s) a particular task belongs to. Even when I export data as JSON through Asana's web application, teams are not mentioned anywhere. From what I understand, Asana itself does not provide an association between teams and tasks. Please correct me if I am wrong.
But if I am right in my conclusion, is there a workaround I could use? Teams are an important part of my data rendering, and I need them to be mapped correctly in the reports I am trying to generate from Asana. The report I want to generate would be hierarchical in nature:
Organisation
Team
Projects
Tasks
Subtask
Can I do something to achieve this hierarchy? The only place I get stuck is getting the projects under a particular team.
Glad to hear that you found that wrapper useful. We will be releasing a PHP Library ourselves soon that you may be interested in. Stay tuned!
Below is some pseudo-code to derive the hierarchy you are looking for, I think. Let me know if it helps.
GET /workspaces
{
"data": [
{
"id": 1234,
"name": "Startup Inc"
}
]
}
GET /workspaces/1234
{
"data": {
"id": 1234,
"name": "Startup Inc",
"is_organization": true,
...
}
}
Because is_organization is true, we can then continue...
GET /organizations/organization-id/teams
{
"data": [
{
"id": 9876,
"name": "Ninja Team"
}
]
}
GET /teams/9876/projects
{
"data": [
{
"id": 5678,
"name": "Stealth Project"
}
]
}
GET /projects/5678/tasks
{
"data": [
{
"id": 8675309,
"name": "Top secret video"
}
]
}
GET /tasks/8675309
{
"data": {
"id": 8675309,
"created_at": "2015-03-25T17:28:59.255Z",
"modified_at": "2015-05-15T03:13:28.754Z",
"name": "Top secret video",
"notes": "https://www.youtube.com/watch?v=6WTdTwcmxyo",
"completed": false,
... # All the task data
}
}
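A rough PHP sketch of walking that hierarchy with curl might look like the following. It is untested and makes a few assumptions: a personal access token in $token, the https://app.asana.com/api/1.0 base URL, and the endpoints from the pseudo-code above (teams are only available for workspaces that are organizations):

function asanaGet($path, $token) {
    // Minimal GET helper against the Asana REST API.
    $ch = curl_init('https://app.asana.com/api/1.0' . $path);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HTTPHEADER, ['Authorization: Bearer ' . $token]);
    $body = curl_exec($ch);
    curl_close($ch);

    return json_decode($body, true)['data'];
}

$token = 'YOUR_PERSONAL_ACCESS_TOKEN';

foreach (asanaGet('/workspaces', $token) as $workspace) {
    // Teams only exist for workspaces that are organizations.
    foreach (asanaGet('/organizations/' . $workspace['id'] . '/teams', $token) as $team) {
        foreach (asanaGet('/teams/' . $team['id'] . '/projects', $token) as $project) {
            foreach (asanaGet('/projects/' . $project['id'] . '/tasks', $token) as $task) {
                echo "{$workspace['name']} > {$team['name']} > {$project['name']} > {$task['name']}\n";
            }
        }
    }
}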