How to implement event-driven code in PHP?

Is it possible to implement an event-driven program in PHP, something like JavaScript?
As an example: open a socket (open_socket) and execute some other command (do_something_else) instead of waiting for the socket request's success response.
After getting the success response, execute callback_execute.
//--------------------------------------------------------------------
public function open_socket(){
$this->socketResource = fsockopen($this->nodeIp, $this->portNumber);
}
public function callback_execute($command){
fputs($this->socketResource, $command);
}
public function do_something_else(){ xxxxx }
//--------------------------------------------------------------------
Non_blocking_function(array($obj, 'open_socket'), array($obj, 'callback_execute'));
$obj->do_something_else();

There is only a single thread in PHP, so doing something useful while waiting for some event is not possible in plain PHP.
Some workarounds are available, but they are probably not very reliable, especially if you plan to write portable code. I would assume the workarounds are risky, since the language has no concept of concurrency. It's therefore probably best to write multi-threaded code in another language (Java, Scala, …) and use PHP just for displaying the prepared results (if using PHP at all for such problems).
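For what it's worth, one family of workarounds the answer alludes to is PHP's non-blocking stream API. The sketch below uses stream_socket_client() with STREAM_CLIENT_ASYNC_CONNECT plus stream_select(), with stub functions standing in for the asker's methods; treat it as an illustration under those assumptions, not production code.
// Stubs standing in for the asker's methods.
function do_something_else() {
    echo "doing other work while the connection is pending...\n";
}
function callback_execute($socket) {
    fwrite($socket, "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n");
}

// Start an asynchronous connect: this returns immediately instead of blocking.
$socket = stream_socket_client(
    'tcp://example.com:80',
    $errno,
    $errstr,
    0,
    STREAM_CLIENT_ASYNC_CONNECT | STREAM_CLIENT_CONNECT
);
stream_set_blocking($socket, false);

do_something_else(); // runs while the connection is still being established

// Block (here: at most 5 seconds) until the socket is writable, i.e. connected.
$read = null;
$write = array($socket);
$except = null;
if (stream_select($read, $write, $except, 5) > 0) {
    callback_execute($socket); // the 'success response' callback
}
This is still a far cry from JavaScript's event loop, which is the point the answer makes.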

Related

How does a Node.js server (like Express) manage memory as opposed to a PHP server?

From what I understand, basically, PHP server-side apps (PHP-FPM) load the entire app from scratch on every request and then close it down at the end of a request. Meaning that variables, containers, config and everything else gets read and built from zero in each separate request and there is no crossover. I can use this knowledge to structure the app better. For example, I would know that class statics hold their data only for the duration of the request and each new request will have its own value.
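(To illustrate the PHP side with a hypothetical Counter class: under PHP-FPM the following prints 1 on every request, because static state does not survive the request.)
class Counter {
    public static $hits = 0;
}
Counter::$hits++;
echo Counter::$hits; // always 1 under PHP-FPM; in a long-running process it would keep growing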
A Node.js server like Express.js works very differently, however. It is a single Node.js process that is running continually and listens for any new requests and passes them along to the correct handlers. This requires a different approach to development, as there is data that is kept in memory between requests. For example, class statics in such a case sound like they would hold data for the entire duration of the server uptime, not just for the duration of a single request.
So I have some questions about this:
Does it make sense to pre-load some data during Express.js startup (like reading private keys from file) so that it is already in memory when needed by a request and it would get re-used each time without being re-read from file? In a PHP server framework this wouldn't matter that much as everything gets built from 0 with each request.
How do I properly handle exceptions in a Node.js server process? If a PHP server script throws a fatal exception only that specific request dies, all other requests and any new ones run fine. If a fatal error happens in a Node.js server, it sounds like it would kill the entire process and thus all requests with it.
If you have any resources about this topic, it'd be great if you could share them as well.
1-
Does it make sense to pre-load some data during Express.js startup (like reading private keys from file) so that it is already in memory when needed by a request and it would get re-used each time without being re-read from file? In a PHP server framework this wouldn't matter that much as everything gets built from 0 with each request.
Yes, totally. You would bootstrap database connections, read data from files, and do similar tasks at application startup, so the results are always available in every request.
There are some things to consider in this scenario:
During application startup, you can safely call synchronous methods, like fs.readFileSync etc., because there are no concurrent requests on the single thread at this point.
CommonJS modules cache their exported value. So if you choose to use a dedicated module to handle secrets read from a file, database connections, etc., you can:
secrets.js
const fs = require('fs');
const gmailSecretApiKey = fs.readFileSync('path_to_file');
const mailgunSecretApiKey = fs.readFileSync('path_to_file');
...
module.exports = {
gmailSecretApiKey,
mailgunSecretApiKey,
...
}
Then require this at your application startup. After that, any module that does:
const gmailKey = require('.../secrets').gmailSecretApiKey won't read from the file again; the result is cached in the module.
This is important because it allows you to use require and import to consume configuration in your controllers and modules, without having to pass extra parameters to your HTTP controllers or add them to req objects.
Depending on your infrastructure, you may not be able to let your application stop handling requests during startup (e.g. you have only one machine up and don't want to return Service Unavailable to your clients). In such cases, you can expose all the configuration and shared resources as promises, bootstrap your web controllers as fast as possible, and wait for the promises inside. Let's say we need Kafka up and running when handling a request on '/user':
kafka.js
function kafka() {
// return some promise of an object that can publish and read from kafka in a given port etc. etc.
}
module.exports = kafka();
So now in:
userController.js
const kafka = require('.../kafka');
const router = require('express').Router(); // assuming an Express router
router.get('/user', (req, res) => {
kafka.then(k => {
k.publish(req.user, 'userTopic'); // or whatever. This is just an example.
res.sendStatus(202);
});
})
This way, if a user makes a request during bootstrap, the request will still be handled (it will just take some time). Requests made once the promise has already resolved won't notice anything.
There's no such thing as multiple threads in Node: anything you declare in a CommonJS module or write to process will be available in every request.
2-
How do I properly handle exceptions in a Node.js server process? If a PHP server script throws a fatal exception only that specific request dies, all other requests and any new ones run fine. If a fatal error happens in a Node.js server, it sounds like it would kill the entire process and thus all requests with it.
This really depends on the kind of exception you encounter. Is it specifically related to the request being processed, or is it something critical for the whole application?
In the former case, you want to catch the exception and not let the whole thread die. Now, 'catching the exception' in JavaScript is tricky, because you cannot catch asynchronous exceptions/errors with try/catch; you would likely use process.on('unhandledRejection') to handle those, like:
// main.js
try {
bootstrapMongoDb();
bootstrapKafka();
bootstrapSecrets();
... whatever
bootstrapExpress();
} catch(e){
// read what `e` brings and decide.
// however, it's worth mentioning that errors raised while handling
// http requests won't ever get handled here, because they are
// asynchronous. try/catch in javascript doesn't catch asynchronous errors.
}
process.on('unhandledRejection', e => {
// here we treat unhandled promise rejections; errors raised in express
// controllers will likely end up here. of course, I'm talking about
// promise rejections; I am not sure if this can catch Errors thrown in callbacks.
// You should never `throw new Error` inside an asynchronous callback.
});
Handling errors in a Node application is a whole topic on its own, too broad to cover here. However, some tips shouldn't do any harm:
Never throw errors in callbacks. throw is synchronous. Callbacks and asynchrony should rely on an error parameter or a promise rejection.
You better get used to promises. Promises really improve error management in asynchronous code.
Javascript errors can be decorated with extra fields, so you can fill in trace id's and other id's that may be useful when reading logs of your system, given you will log your unhandled errors.
Now, in the latter case... sometimes there are failures that are totally disastrous for your app. Maybe you absolutely need a connection to a Kafka or Mongo server, and if it is broken, you may want to kill your application so clients receive a 503 when trying to connect.
In some scenarios you may want to kill your app and let another service reboot it once the database is available again. This depends a lot on your infrastructure, and you may just as well never kill your app.
If you don't have infrastructure that handles the health and restart of your web service for you, it is probably safer to never let your application die. That said, it's a good idea to at least use tools like nodemon or PM2 to ensure your app relaunches after going down.
Bonus: why you should not throw errors in callbacks
Thrown errors propagate through the call stack. Say function A calls B, which in turn calls C, and C throws an Error. All of them contain only synchronous code.
In that scenario, the error propagates to B and, if B doesn't catch it, on to A, and so on.
Now let's say that, instead, C doesn't throw an error itself but calls fs.readFile(path, callback), and the error is thrown inside the callback function.
Here, by the time the callback is invoked and the error thrown, A has already finished and left the stack long ago, hundreds of milliseconds earlier, maybe more.
This means that any catch block in A won't catch the error, because A isn't even on the stack anymore:
function bootstrapTimeout() {
try {
setTimeout(() => {
throw new Error('foo');
console.log('paco');
}, 200);
} catch (e) {
console.log('error trapped!');
}
}
function bootstrapInterval() {
setInterval(() => {
console.log('interval')
}, 50);
}
console.log('start');
bootstrapTimeout();
bootstrapInterval();
If you run that snippet, you will see the error reach the top level and kill the process, even though the throw new Error('foo'); line is placed within a try/catch block.
The (error, result) interface
Instead of throwing errors in asynchronous code, Node.js has the standard behavior of exposing an (error, result) interface for every callback you pass to an asynchronous method. If, for instance, fs.readFile goes wrong because the file does not exist, it does not throw an error; it invokes the callback with the corresponding Error as the error parameter.
Like:
fs.readFile('notexists.png', (error, result) => {
if(error){
// foo
}
else {
http.post('http://something.com', result, (error, response) => {
if(error){
// oops, something went wrong with an http request
} else {
// keep working
// etc.
// maybe more callbacks, always with the dreadful 'if (error)'...
}
})
}
});
You always handle errors from async operations in the callback; you should never throw.
Now, this is a pain in the ass. Promises allow for much better error control, because you can handle async errors in one single catch block:
fsReadFilePromise('something.png')
.then(res => someHttpRequestPromise(res))
.then(httpResponse => someOtherAsyncMethod(httpResponse))
.then(_ => maybeSomeLoggingOrWhatever() )
.catch(e => {
// here you can control any error thrown in the previous chain.
});
And there's also async/await, which allows you to mix async and sync code and handle promise rejections in catch blocks:
async function main() {
try {
a(); // some sync code
await b(); // some promise
} catch(e) {
console.log(e); // either an error thrown in a() or the rejection reason of the promise in b()
}
}
However, keep in mind that await is no magic, and you really need to understand promises and asynchrony well in order to use it properly.
In the end, you always have one error control flow for synchronous errors, via try/catch, and another for asynchronous errors, via callback parameters or promise rejections.
Callbacks can use try/catch when consuming synchronous APIs, but should never throw. Any function can use catch blocks to handle synchronous errors, but cannot rely on them to handle asynchronous errors. Kinda messy.
Does it make sense to pre-load some data during Express.js startup (like reading private keys from file) so that it is already in memory when needed by a request and it would get re-used each time without being re-read from file?
Yes, it makes sense if you structure your code so that this data is available in the request handler. In the following example, as far as I know, staticResponse is read only once.
const express = require('express');
const fs = require('fs');
const staticResponse = fs.readFileSync('./data');
const app = express();
app.get('/', function (req, res) {
res.json(staticResponse);
});
app.listen(3000, function () {
console.log('Example app listening on port 3000!');
});
How do I properly handle exceptions in a Node.js server process? If a fatal error happens in a Node.js server, it sounds like it would kill the entire process and thus all requests with it.
Exactly: an unhandled exception makes the entire Node.js process crash. There are multiple ways to manage errors, and there isn't a one-size-fits-all solution; it depends on how you write your code.
all requests with it => keep in mind that Node.js is single-threaded.
app.post('/', function (req, res, next) {
try {
const data = JSON.parse(req.body.stringedData);
// use data
res.sendStatus(200);
} catch (err) {
return next(err);
}
});

Profiling *every* function call?

You may wonder why I would want to do this. I'm trying to debug PHP performance on an embedded system, and I don't have access to any kind of tools on the device.
I was thinking that if I could just do a simple microseconds calculation on every call, it would work.
Is there a way to do it? Essentially, wrap all of my own functions (not PHP built-ins).
This wouldn't be for production, of course.
You can use declare(ticks=N); to run a callback every N ticks, like:
// A function called on each tick event
declare(ticks=1);
function tick_handler()
{
static $last = null;
$now = microtime(true); // get time
$trace = debug_backtrace(); // get the function executing when the tick fired
$fn = isset($trace[1]['function']) ? $trace[1]['function'] : '{main}';
if ($last !== null) {
error_log(sprintf('%s: %.6fs', $fn, $now - $last));
}
$last = $now;
}
register_tick_function('tick_handler'); // without this, the handler never runs
http://www.php.net/declare
You just have to pick the right tick count, e.g. 1000, for your test cycles.

access PHP Methods with jQuery

I'm completely new to jQuery and Ajax, but I've managed to learn how to do the Hello World, populate a select tag, etc, etc...
Problem is, I don't like using procedural PHP. The way I learned it, I have to call some PHP file with $.getJSON, and that file has to "echo" my result.
What I want is to be able to call a PHP file that is actually a class with some methods, where the method's return value is what JavaScript receives instead of just an echoed result.
Thanks for your attention.
P.S.: I have a lot of experience with PHP OOP and Flex+PHP using Amfphp. I'm trying to build a different version of the view, and I would like to re-use the classes that Flex already uses.
jQuery runs on your computer, and PHP runs on the server. PHP and jQuery can only communicate via a series of well-crafted strings. On the server, you are free to create objects, run methods, manipulate output, and anything else. However, if you're going to be feeding that data back into your jQuery application (still running on the client's machine), you'll need to echo (or output) the results of your PHP script.
You may consider something like this:
$.post('server.php', { 'class':'foo', 'method':'bar' }, function( response ) {
/* do something with the output of $foo->bar(); */
});
As you can see here, I can define the class and method I'd like to have called on the server. From server.php, we would look to $_POST['class'] and $_POST['method'] to determine what we will instantiate, and which methods we will run.
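On the server side, server.php might then dispatch along these lines. This is only a sketch, assuming a class foo with a method bar exists on the server; the whitelist is essential, since instantiating arbitrary class names taken from user input would be a serious security hole.
// server.php: a minimal, hypothetical dispatcher for the $.post call above.
$allowed = array('foo' => array('bar')); // whitelist: class => callable methods

$class  = isset($_POST['class'])  ? $_POST['class']  : '';
$method = isset($_POST['method']) ? $_POST['method'] : '';

if (isset($allowed[$class]) && in_array($method, $allowed[$class], true)) {
    $object = new $class();
    $result = $object->$method();
    header('Content-Type: application/json');
    echo json_encode($result); // jQuery receives the method's return value
} else {
    header('HTTP/1.1 400 Bad Request');
}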
AMF is somewhat different from HTTP; they're different protocols.
When using AJAX (jQuery or not), you're calling HTTP methods on URIs, not OOP methods. So everything ends up in a minimum of two mappings:
Your application logic mapped to methods and URIs.
Your Javascript code mapped to methods and URIs.
Here is a sample using Respect\Rest:
$router->get('/users/*', function($userName) {
return MyDatabaseLayer::fetchUser($userName); //Illustrative
})->accept(array(
'application/json' => function($data) {
header('Content-type: application/json');
return json_encode($data);
}
));
Now the jQuery part:
$.getJSON('/users/alganet', function(user) {
alert(user.name);
});
You should use the appropriate HTTP method for each action. Saving a user would be something like:
$router->post('/users/*', function($userName) {
return MyDatabaseLayer::saveUser($_POST['user']); //Illustrative
});
jQuery:
$.post('/users', $('#userform').serialize());
There are four main HTTP methods: GET, POST, PUT and DELETE. GET and POST are the most common ones.
A nice bit of trivia: both HTTP and REST were authored by the same guy, Roy Fielding.

Best Practice for returning cross-site JSON response

I'm currently working on a small application that works like this:
When the user clicks a link, an Ajax GET request is fired.
The request hits a server-side PHP script.
The script, which requests information for another domain, retrieves a JSON feed.
The feed is then echoed back to the client for parsing.
I'm not really a PHP developer, so I am looking for some best practices with respect to cross-domain requests. I'm currently using file_get_contents() to retrieve the JSON feed and, although it's functional, it seems like a weak solution.
Does the PHP script do anything other than simply call the other server? Do you have control over what the other server returns? If the answers are No and Yes, you could look into JSONP.
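For reference, a JSONP endpoint on the other server would wrap its JSON payload in a callback name supplied by the client, so the browser can load it directly via a script tag and skip the PHP proxy entirely. A minimal sketch, with a made-up $payload; the sanitization step matters, since the callback name is echoed into executable JavaScript:
// jsonp.php on the data-providing server
$payload = array('var1' => 'value1'); // stands in for the real feed data

// Allow only a safe identifier as the callback name.
$raw = isset($_GET['callback']) ? $_GET['callback'] : 'callback';
$callback = preg_replace('/[^A-Za-z0-9_.]/', '', $raw);

header('Content-Type: application/javascript');
echo $callback . '(' . json_encode($payload) . ');';
On the client, $.getJSON('http://other-domain/jsonp.php?callback=?', ...) lets jQuery fill in the callback name automatically.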
You might want to abstract the retrieval process in PHP with an interface so you can swap out implementations if you need to. Here is a naive example:
interface CrossSiteLoader
{
public function loadURL($url);
}
class SimpleLoader implements CrossSiteLoader
{
public function loadURL($url)
{
return file_get_contents($url);
}
}
Comes in handy if you need to test locally with your own data because you can use a test implementation:
class ArrayLoader implements CrossSiteLoader
{
public function loadURL($url)
{
return json_encode(array('var1' => 'value1', 'var2' => 'value2'));
}
}
or if you just want to change from file_get_contents() to something like cURL:
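A cURL-based implementation might look like the following sketch (error handling kept to a minimum):
class CurlLoader implements CrossSiteLoader
{
    public function loadURL($url)
    {
        $handle = curl_init($url);
        curl_setopt($handle, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
        curl_setopt($handle, CURLOPT_TIMEOUT, 10); // don't hang forever on a slow feed
        $body = curl_exec($handle);
        curl_close($handle);
        return $body;
    }
}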

How to design error reporting in PHP

How should I write error reporting modules in PHP?
Say, I want to write a function in PHP: 'bool isDuplicateEmail($email)'.
In that function, I want to check if the $email is already present in the database.
It will return true if the email exists, else false.
Now, the query execution can also fail; in that case I want to report 'Internal Error' to the user.
The function should not die with the typical MySQL error, die(mysql_error()). My web app has two interfaces: browser and email (you can perform certain actions by sending an email).
In both cases, errors should be reported cleanly.
Do I really have to use exception handling for this?
Can anyone point me to some good PHP project where I can learn how to design robust PHP web-app?
In my PHP projects, I have tried several different tacks. I've come to the following solution, which seems to work well for me:
First, any major PHP application I write has some sort of central singleton that manages application-level data and behaviors: the "Application" object. I mention it here because I use this object to collect generated feedback from every other module. The rendering module can query the application object for the feedback it deems should be displayed to the user.
On a lower level, every class is derived from some base class that contains error-management methods, for example "AddError(code,string,global)", "GetErrors()" and "ClearErrors()". The "AddError" method does two things: it stores a local copy of that error in an instance-specific array for that object, and (optionally) notifies the application object of the error ("global" is a boolean), which then stores it for future use in rendering.
So now here's how it works in practice:
Note that 'Object' defines the following methods: AddError, ClearErrors, GetErrorCodes, GetErrorsAsStrings, GetErrorCount, and maybe HasError for convenience.
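The base class itself isn't shown in the answer; a minimal sketch of what it could look like follows. The AddGlobalError() call on the application singleton is an assumption drawn from the description above.
class Object
{
    private $errors = array();

    public function AddError($code, $string, $global = false)
    {
        $this->errors[] = array('code' => $code, 'string' => $string);
        if ($global) {
            // notify the application-level singleton described above (hypothetical method)
            $GLOBALS['app']->AddGlobalError($code, $string);
        }
    }
    public function ClearErrors() { $this->errors = array(); }
    public function GetErrorCount() { return count($this->errors); }
    public function HasError() { return count($this->errors) > 0; }
    public function GetErrorCodes()
    {
        $codes = array();
        foreach ($this->errors as $e) { $codes[] = $e['code']; }
        return $codes;
    }
    public function GetErrorsAsStrings()
    {
        $strings = array();
        foreach ($this->errors as $e) { $strings[] = $e['string']; }
        return $strings;
    }
}
(Note that Object is a poor class name in modern PHP; it is kept here only to match the example below.)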
// $GLOBALS['app'] = new Application();
class MyObject extends Object
{
/**
* @return bool Returns false if failed
*/
public function DoThing()
{
$this->ClearErrors();
if ([something succeeded])
{
return true;
}
else
{
$this->AddError(ERR_OP_FAILED,"Thing could not be done");
return false;
}
}
}
$ob = new MyObject();
if ($ob->DoThing())
{
echo 'Success.';
}
else
{
// Right now, I may not really care *why* it didn't work (the user
// may want to know about the problem, though; see below).
$ob->TrySomethingElse();
}
// ...LATER ON IN THE RENDERING MODULE
echo implode('<br/>',$GLOBALS['app']->GetErrorsAsStrings());
The reasons I like this:
I hate exceptions because I personally believe they make code more convoluted than it needs to be
Sometimes you just need to know that a function succeeded or failed and not exactly what went wrong
A lot of times you don't need a specific error code but you need a specific error string and you don't want to create an error code for every single possible error condition. Sometimes you really just want to use an "opfailed" code but go into some detail for the user's sake in the string itself. This allows for that flexibility
Having two error collection locations (the local level for use by the calling algorithm and global level for use by rendering modules for telling the user about them) has really worked for me to give each functional area exactly what it needs to get things done.
Using MVC, I always use some sort of default error/exception handler, where exceptions from actions (with no error/exception handling of their own) are caught.
There you can decide whether to respond via email or browser, and it will always have the same look :)
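For instance, such a default handler could be installed with set_exception_handler(). A sketch, assuming a hypothetical send_error_reply() helper and an INTERFACE_EMAIL constant defined by the email front controller:
set_exception_handler(function ($e) {
    error_log($e->getMessage()); // always log the real cause
    if (defined('INTERFACE_EMAIL')) {
        send_error_reply('Internal Error'); // hypothetical helper: reply to the triggering email
    } else {
        header('HTTP/1.1 500 Internal Server Error');
        echo 'Internal Error';
    }
});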
I'd use a framework like Zend Framework that has a thorough exception handling mechanism built all through it.
Look into exception handling and error handling in the PHP manual. Also read the comments at the bottom; they're very useful.
There's also a method explained on those pages for converting PHP errors into exceptions, so you only deal with exceptions (for the most part).
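That conversion is typically done with set_error_handler() and the built-in ErrorException class; a minimal sketch:
// Turn every reported PHP error into an ErrorException,
// so ordinary try/catch blocks can handle it.
set_error_handler(function ($severity, $message, $file, $line) {
    throw new ErrorException($message, 0, $severity, $file, $line);
});

try {
    fopen('/no/such/file', 'r'); // normally just a warning; now a catchable exception
} catch (ErrorException $e) {
    echo 'Caught: ' . $e->getMessage();
}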
