Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I have a PHP application (mostly REST) which runs on top of Apache in a Linux virtual machine. The application runs a lot of data queries, and I have started having performance issues.
One way to address this, it seems to me, is to use Node.js async patterns. I also plan to implement WebSockets. The problem is that the PHP codebase is very large; rewriting it in Node would take months.
Is there a middle ground short of a complete rewrite, where I handle interaction with the browser in Node and interaction with the database in the PHP CLI, and Node calls the PHP CLI while approximating the Apache environment?
I am using the Slim PHP framework for the REST API, with both HTTP Basic Auth and PHP sessions, and $_GET variables for extra filters on GET requests. I don't know much about the internal workings of Slim, but I think it depends on the Apache/PHP implementation of HTTP requests and responses.
How do I send the message body (POST, PUT), which in 99% of cases is JSON, to the PHP CLI? (I have file uploads too, but those can be ignored for now.) I can have the PHP CLI write its JSON output to STDOUT and parse it from there.
The real problem is how to remove the dependency on the PHP Apache SAPI without changing much of the codebase, and how to integrate it with Node. Are there any tools or libraries which can help here?
One more side question: can nginx help me here somehow?
Note: my knowledge of Node is limited to a few fun scripts and custom linting, template-compiling, and testing scripts for browser-side code.
First, you could put nginx in front of Apache. This allows you to transition your actions to Node gradually by routing selectively to one or the other.
Alternatively, you could put Node in front and use node-http-proxy with Express (for example) to proxy selectively to Apache. I haven't tried it myself, but I expect it would work.
You could also use dnode to call PHP functions from Node. ZeroMQ is an option, too.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I want to create my own web server that accepts HTTP requests and sends responses to clients.
My idea is to invoke PHP scripts through the CLI interface, but then the scripts execute in CLI mode, where some PHP capabilities are disabled. What do I need to do to invoke scripts from my own server (not in CLI mode)? Do I need to write my own PHP SAPI?
Please help me get started.
Thanks.
No, you don't need to write your own PHP SAPI, as there has already been one written specifically for interfacing with a web server. It's called FastCGI.
It's important to note that PHP is both extensible and embeddable by design. So, while some SAPIs like the Apache 2.0 handler (a.k.a. mod_php) are embedded directly in the httpd server, that is not typically necessary in order to have the web server talk to PHP.
The difference is that you still need some process to manage the underlying PHP interpreters and deal with things like recycling workers or managing the number of worker processes available to the web server. For example, php-fpm does this quite nicely: many people use php-fpm to manage the PHP workers and tie it to their web server (nginx or httpd) via the FastCGI protocol. The PHP workers forked by php-fpm listen on a FastCGI socket, and the web server can freely exchange information with PHP.
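For reference, the nginx side of that arrangement is only a few lines. The socket path and document root below are assumptions; they must match your php-fpm pool configuration:

```nginx
# Sketch: nginx handing .php requests to a php-fpm pool over FastCGI.
# Socket path and root are assumptions -- adapt to your pool config.
server {
    listen 80;
    root /var/www/app/public;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```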
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I want to build a server that will listen on a custom port and talk to my web application through a custom protocol. The server will have a dispatcher and workers that take on tasks and complete them.
Is Laravel up to the job, or should I go with something more specific?
EDIT:
I would like to clarify that it's not an API. Basically, a PHP script will run in a loop in CLI mode (meaning no Apache or nginx is involved here). The script will open a socket and listen on a certain port for connections from clients. Once a client connects, the server will start some jobs and send back the answer. It also involves a job queue (probably backed by a database) from which the server will fetch jobs, forking new processes to complete them.
EDIT:
It seems that you don't need much of a framework at all (except maybe for the database operations, since if you use sockets you will probably not use much of the framework's functionality, such as routing or view templating). Depending on the complexity of your database layer, I'd use a framework or not: if it's very complex, features like Eloquent might help. Think about how much of the framework you would actually use, and whether you could pull in only the pieces you need through Composer instead.
END EDIT
Should you use Laravel/PHP to build a server? It will probably be too slow for that purpose.
1) If you want to build your own server (not a website or an API), I'd much rather go for Node.js or something along those lines (Ruby, Python, C#, ...).
2) By "custom protocol", I assume you don't mean something other than HTTP over TCP/IP? If you do, what exactly do you mean by a "custom protocol"?
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I'm searching for a way to bypass the need for a web server when using a local web application. Why aren't there browser extensions, or special browsers, that do this? It seems very easy to implement: the browser or extension would call a PHP interpreter to compose web pages from PHP files. Local URLs (file:///) would be used; no web service or port would be necessary. Is this nonsense, or am I the first person to think of it? After all, this already works with static HTML files.
First edit: I was looking for a server for testing purposes, able to compose output from PHP files without communicating through network ports. Maybe I should have started from there, but Stack Overflow does not allow that kind of post.
That sounds nice; please go ahead and build that technology, and I will use it too. But for now, that has nothing to do with SO.
Why web servers are needed even for local web applications
Because they are web applications. Technically you don't need a web server for local use if all your application code relies on client-side programming: you can just open your HTML files in the browser.
Browsers already know how to interpret client-side code, so why reinvent the wheel? But if you ever want to write even one line of server-side code, you will obviously need a web server at that point.
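One fact worth adding for the "testing purposes" case in the question: since PHP 5.4, the CLI binary ships a built-in development web server, so no Apache or nginx setup is needed (it does still use a local port). The path below is a placeholder:

```
# PHP's built-in development server (PHP >= 5.4); for testing only, not production.
php -S localhost:8000 -t /path/to/app
```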
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
When I write a web app in PHP, it can run in different modes: FastCGI (php-fpm), an Apache module (mod_php), and so on. In all of these cases, when I edit my PHP scripts the application updates immediately, without restarting the server. I know that Ruby web apps can also run in different modes (e.g. FastCGI, or Unicorn behind nginx). I would like an overview of the most common ways to launch a Ruby web app, with the technical details of each (i.e. when I have to restart the server to update scripts) and their pros and cons.
There are many ways to run Ruby applications: classic 1990s-style CGI, FastCGI, independent HTTP-capable processes with Mongrel or Thin, and the more modern, recommended approach, which uses a launcher like Passenger to manage the processes more directly.
Ruby on Rails applications have several operating modes; the default two are development and production, and they have some important differences.
In development:
Anything in app/ or config/routes.rb is reloaded on each request.
The log/development.log is as verbose as possible, recording each query executed.
Assets are served in their raw form and can be changed at any time.
In production:
The application is loaded once and cached; any changes require a restart.
The log/production.log file contains only errors and important notifications.
Assets are minified, compressed, and served in bundles; even minor changes require repackaging everything.
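The behavioural differences listed above boil down to a few settings in config/environments/*.rb. A simplified excerpt with Rails 3/4-era setting names (the application name is hypothetical):

```ruby
# config/environments/production.rb (simplified excerpt)
MyApp::Application.configure do
  config.cache_classes = true          # load app code once and cache it; restart to pick up edits
  config.log_level     = :info         # keep production.log terse
  config.serve_static_assets = false   # precompiled, minified bundles served by the web server
end

# development.rb reverses the first setting:
#   config.cache_classes = false   # app/ and config/routes.rb reload on every request
```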
It's generally a bad idea to be editing scripts on a production server, so the Rails approach is usually a good thing. If you need to rapidly iterate on something, do it on your local copy and push the changes to the server only when you're confident they'll work.
If you really need to make an emergency fix on the server, you can edit the code and then touch tmp/restart.txt to force a reload. When you change assets, you'll have to run rake assets:precompile to repackage them.
Like a lot of things, Rails makes the recommended approach easy and the risky one hard.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
In the middle of 2010, I found a class library called PHPToCLib. It ran flawlessly for over a year: I was able to build a tremendous amount of my own custom code into an AIM bot that I could run from my command prompt. However, near the end of 2011, the servers stopped responding to the script. It connects to toc.oscar.aol.com on port 5190, and that hasn't changed. I am aware that AOL discontinued their TOC2 servers and that it's no longer possible to connect to them. However, I downloaded a program called TerraIM that uses the same specifications and is somehow still able to connect. Is there any way I can get my script to connect, and if so, what do I need to change?
Thank you in advance.
TerraIM also supports the OSCAR protocol, which I assume it is defaulting to. If you are working with IM bots, the absolute best way to go is to leverage libpurple. Unfortunately there is no good PHP binding to libpurple, but there are a couple of Python bindings. If you don't wish to migrate your code, there is an implementation that provides an HTTP interface, which may be easy to integrate with depending on your use case. Alternatively, you could use Thrift to communicate between your existing PHP code and the Python bindings; this would require a bit more coding than leveraging an HTTP interface. Here are some resources you may find helpful:
Python bindings:
github.com/fahhem/python-purple
github.com/Raptr/Heliotrope
HTTP interface:
github.com/atamurad/http-purple
Thrift:
http://thrift.apache.org/