Is there any way to post multiple items in one request? - php

I'm developing a mobile web application, Tasks (for iPhone), with a local database (using JSON files) so my app is still usable when the user is offline.
This is working perfectly, but I want to save the local data on a server. So I need to synchronize the local DB with a DB on a server (in a REST(ful) way).
What I want to do is:
Collect all tasks and send to the server. At the moment I see two options to do this:
Send each task to the server: POST /tasks
I actually don't want to do this because I want to limit the number of requests to the server, hence option 2:
Collect all the tasks and send them to the server at once.
Is there any way to do this with (maybe with slimframework php) ?

I guess that you want to do some bulk updates on your RESTful application.
In fact, the POST method on the list resource /tasks is generally used to add a single element, but it can also be used to add more than one. In that case, you need to support a parameter (for example, a dedicated header) to determine which "action" to execute for the POST.
The PATCH method can also be used for such a use case. It is designed precisely for this and can contain a list of operations to perform on elements (add, delete, update).
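On the client side, the bulk POST could look like the sketch below: collect all locally stored tasks into one JSON payload and send them in a single request. The payload shape and the "X-Action: batch" header are illustrative assumptions (this is the kind of "dedicated header" parameter mentioned above), not part of any standard:

```javascript
// Collect locally stored tasks into one bulk payload.
// The payload shape and the "X-Action: batch" header below are
// assumptions -- adapt them to whatever your server expects.
function buildBulkPayload(tasks) {
  return JSON.stringify({ tasks: tasks });
}

var pending = [
  { id: 1, title: "Buy milk", done: false },
  { id: 2, title: "Write report", done: true }
];

var body = buildBulkPayload(pending);

// The actual request would then be something like:
// fetch("/tasks", {
//   method: "POST",
//   headers: { "Content-Type": "application/json", "X-Action": "batch" },
//   body: body
// });
```

On the server (Slim or otherwise), one route handler can then decode the array and insert all tasks in one transaction.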
I think that these two answers could give you some more hints:
REST API - Bulk Create or Update in single request
How to Update a REST Resource Collection
Hope this helps you,
Thierry

Related

How to efficiently construct HTML templates when all the data comes from an internal API?

Here's the context: we're currently using the basic web stack, and our website builds HTML templates with data it gets from the database directly.
For tons of reasons, we're splitting this into two projects: one will be responsible for talking to the database directly, the other for displaying the data.
To make it simple, one is the API, the other is the client.
Now we're wondering how we should ask our API for data. To us, there are two totally different options:
One request, one route, per page. We would get one huge object containing everything needed to build the corresponding page.
One request per little chunk of data. For example, on a listing page we'd make one request for the currently logged-in user (to display their name and avatar), another request for all the articles, another request for the current page category...
Some like the first option; I don't like it at all. I feel like we're going to have a lot of redundancy, and I'm not sure one huge request is that much faster than X tiny requests. I also don't like binding data to a specific page, as I feel the API should be (somewhat) independent from our front website.
Some also don't like the second option; they fear we'll overload the server by making too many calls, and I can understand this fear. It also looks like it will be hard to properly define the scope of what to send and what not to send without any redundancy. If we're sending only what's needed to display a page, isn't that the first option in the end? But isn't sending unneeded information a waste?
What do you guys think ?
The first approach is good if getting all the data is fast enough. The fewer requests, the faster the app. By redundancy I think you mean code redundancy, because sending the same amount of data in one request will definitely be faster than in 10 small non-parallel ones (network overhead). If you send a few parallel requests from the UI you can get a performance gain, of course, but you should take into account that browsers limit the number of parallel requests.
In another case, if some data is fast to fetch but other data is slow, you can return the fast data first, show a loading indicator in the UI, and load the slow data when it arrives. This improves the user experience by showing the page as fast as possible.
The second approach is more flexible, as you can reuse some requests from other pages. But it comes with a price: the logic for making these requests (gathering information) moves into the UI code, making it more complex. And if you need the same data in another app, such as a mobile one, you have to copy this logic. As a rule, creating such code on the backend side is easier.
You can also take a look at the pattern that lets you put business/domain logic in one service and "frontend friendly" logic in another (an orchestration service).

Javascript & PHP: I need advice on how to separate data_type before sending to the server

I am trying to write a web game using the WebSocket API.
The game has two players, and I am running two jobs through one server:
one is a simple game where users can choose between two options;
the other is a chat box.
Both of these jobs send data to a server script that processes it.
To send this data, I use WebSocket's .send(data) method. My problem is that the two jobs have to be handled separately on the server, but .send(data) has no parameter to differentiate an event name, so the server treats the two kinds of submitted data equally.
Please suggest a way to tell the server to differentiate the two jobs. I have thought about putting a prefix on the data (e.g. chat_*, game_*), but if users know my prefix they can screw up my server.
What should I try next?
Yes, a prefix should work just fine. Whatever arrives via .send(data), also take extra care on the server to handle bad input.
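Instead of a raw string prefix, one common approach is to wrap every message in a small JSON envelope with a type field and whitelist the allowed types on the server. Here's a sketch (the type names "chat" and "game" and the function names are just illustrative):

```javascript
// Client side: wrap the payload in an envelope before websocket.send(...).
function encodeMessage(type, payload) {
  return JSON.stringify({ type: type, payload: payload });
}

// Server side: parse the envelope and dispatch by type.
// Unknown or malformed messages are dropped instead of crashing the server.
var ALLOWED_TYPES = ["chat", "game"];

function decodeMessage(raw) {
  var msg;
  try {
    msg = JSON.parse(raw);
  } catch (e) {
    return null; // not valid JSON -> drop it
  }
  if (!msg || ALLOWED_TYPES.indexOf(msg.type) === -1) {
    return null; // unknown type -> drop it
  }
  return msg;
}
```

Even if users discover the envelope format, the server only acts on whitelisted types, so a crafted message can't reach unintended code paths; the payload itself still needs its own validation.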

Can I use Laravel 4 Queues to process print-jobs on Windows?

This question is a continuation to questions I have asked previously with regard to printing documents via Word on Windows from Laravel.
My issue was that I did not want to launch the necessary printing tasks within a POST request, as this would give no feedback on the task and would only respond once the task completed.
For example, if I called the POST /pledge/submit route, I would not want to call the necessary tasks for printing within that same request for the route.
Now, I see that Laravel 4 has a facility called Queues, which (I assume) would allow me to background process these tasks, and postpone them until a later time.
Having read through the documentation, I see that it supports four different drivers, one of which is sync.
Question: Can I use this driver to create new print jobs in the queue, and have them executed by an external application (such as one created in Delphi)? The app would periodically check to see if there are items in the queue, and then execute them (and, of course, remove them).
I am simply trying to find the best way to publish documents without the end-user having to wait for the page to respond whilst printing is underway. Further, I am new to queues in PHP, and am not familiar with how they work (in so far as a detailed process flow). If someone could also explain this, I would appreciate it very much.
The queue system wouldn't work for your Delphi program out of the box; you would need to make some modifications.
Instead, the easiest way would be to make your own table in your database, called 'pending_print_jobs'.
When the user wants to print job 'x', you get PHP to save the print job in the 'pending_print_jobs' table with all the details you need (such as the file to be printed, the user who requested it, etc.).
Then you get your external application (i.e. your Delphi program) to periodically check the 'pending_print_jobs' table in your database. If it finds any records, it can action them and print the files.
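The key detail in that polling loop is that the external program should claim a job (mark it as taken) before printing, so the same job is never printed twice, and remove it once done. A sketch of that claim/complete cycle, using an in-memory array as a stand-in for the 'pending_print_jobs' table (the column names are assumptions; in reality these would be SELECT/UPDATE/DELETE statements issued by the Delphi program):

```javascript
// In-memory stand-in for the pending_print_jobs table.
var pendingPrintJobs = [
  { id: 1, file: "pledge-42.docx", user: "alice", status: "pending" },
  { id: 2, file: "pledge-43.docx", user: "bob", status: "pending" }
];

// Claim the oldest pending job so no other worker picks it up.
function claimNextJob(jobs) {
  for (var i = 0; i < jobs.length; i++) {
    if (jobs[i].status === "pending") {
      jobs[i].status = "printing"; // mark as taken before printing
      return jobs[i];
    }
  }
  return null; // nothing to do this polling cycle
}

// After printing succeeds, remove the job from the table.
function completeJob(jobs, id) {
  var idx = jobs.findIndex(function (j) { return j.id === id; });
  if (idx !== -1) jobs.splice(idx, 1);
}
```

With a real database, the claim step should be a single atomic UPDATE (set status where status = 'pending') so two pollers can't grab the same row.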

Using node with MySql OR using node with PHP (which will handle the MySql operations)

I have a node.js server that serves an interactive HTML page to the client browser. So in this case, the client frontend page is not served from the Apache server but from the node.js server.
When users perform actions (e.g. creating a wall post or comment), I am considering one of two possible flows of operations to handle them, but I have no idea which one is better (in terms of performance, scalability for a large number of users, and ease of implementation).
So these are my options:
OPTION 1:
Socket.io sends the message details to the node server (which listens on a particular port). The message is encoded as JSON.
Node receives the message.
Node communicates directly with the MySQL database. Input sanitization is performed at this step.
Node broadcasts the details to the other users subscribed to the same socket.io room.
OPTION 2:
Socket.io sends the details to the node server. The message is encoded as JSON.
Node receives the message.
Node calls PHP using HTTP POST. The PHP controller then handles the POST message and saves all the corresponding details to the MySQL database. Input sanitization is performed at this step.
PHP notifies the node.js server via redis, say on a channel called "mysqldb".
Node subscribes to the "mysqldb" channel and then broadcasts the corresponding details to the users subscribed to the same socket.io room.
Can you advise me on the advantages and disadvantages of the above two approaches? Is there a better approach?
Counter questions:
How many steps are required for the same action in each of your options until everything is through?
Think about all the things that could go wrong / fail / break for each of the options.
How much easier would it be if everything were written in one language?
How much time could be saved if data were not passed around between node, PHP, MySQL, and redis?
How many software packages do you have to install and configure for each of the options?
Unless you have a very, very specific reason to plug PHP into an otherwise fully functional software stack, I actually do not know why this question was even asked.
Short version: Option 1
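To make Option 1 concrete, here is a minimal sketch of the server-side step: sanitize the input, save it, then broadcast. The function and room names are hypothetical, and db.save / broadcast are stand-ins for the MySQL insert and socket.io's room emit (in a real app, something like io.to(room).emit(...)):

```javascript
// Escape HTML special characters so user input can't inject markup
// when it is later rendered in other users' browsers.
function sanitize(text) {
  return String(text)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Handle an incoming wall-post message: sanitize, save, then broadcast.
// db.save and broadcast are stand-ins for the MySQL insert and the
// socket.io room emit described in Option 1.
function handleWallPost(msg, db, broadcast) {
  var clean = { user: sanitize(msg.user), text: sanitize(msg.text) };
  db.save(clean);
  broadcast("wall-post", clean);
  return clean;
}
```

Note that escaping for HTML output is separate from SQL safety: use parameterized queries for the database insert regardless.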

Execute PHP script automatically whenever database changes

THE ESSENCE
I have a table in my database (SQL Server) and a WCF service that communicates with the DB to modify it. I also have a PHP script on my server. Now, whenever data in my table is added, modified, or deleted, I want my PHP script to be executed automatically.
ITS APPLICATION
I am working on a mobile application that is almost completed. Now I need to implement a push notifications feature. I.e. whenever there is a change in the database I have to run my server side script that is configured to push a notification to the user.
Push servicing should be implemented in the layer that adds the data to the database. A database is merely for storing information, not for executing code. So in order to create push notifications, you need an interface that receives the updates, sends them to the database, and pushes them to the users.
Seeing your question, I certainly hope you don't allow your app to add information directly to the database without some layer that validates incoming information. That would be a direct security risk.
A model of such a solution would be:
App -> send information -> Website -> analyzes request in PHP/other code languages -> inserts in Database -> push message to clients.
Whatever RDBMS you are using, you need to read its documentation about "triggers". That should give you all you need to know.
Most of the time this is done using cron (or another scheduler) to check the database at a certain interval (every minute, for example) to find new work and then process it.
However, that sort of design only gets you so far; the next stage would be to move up to a message queue (like Gearman, ZeroMQ, etc.).
You might be able to make something work with triggers, but generally a database should not be treated as a queue.
