I was reviewing a part of my application where I connect to the Elasticsearch host server, and I realized that every time the front-end sends a report request to my back-end, I create a new instance of the Elasticsearch client class using the following code:
$elasticClient = ClientBuilder::create()->setHosts($this->setHostsParams())->build();
Since our application sends about 20 requests to the back-end when loading the first page, I was wondering whether PHP's Elasticsearch library can optimize this initialization phase, whether anyone has a better solution for this, or whether it's not that big a deal after all and there's no real overhead.
PS: I did some research on this and didn't find any resources covering the subject.
Sharing an object instance is already discussed here and elsewhere so I'm not going to go into that.
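(That said, here is a minimal sketch of one common way to reuse a single client, assuming elasticsearch-php 7.x and PHP 7.4+; the holder class name is made up for illustration. Keep in mind PHP shares nothing between HTTP requests, so this only helps within a single request/process.)

```php
<?php
use Elasticsearch\Client;
use Elasticsearch\ClientBuilder;

class ElasticClientHolder
{
    private static ?Client $client = null;

    // Build the client once per PHP process and hand back the same
    // instance on every later call.
    public static function get(array $hosts): Client
    {
        if (self::$client === null) {
            self::$client = ClientBuilder::create()->setHosts($hosts)->build();
        }
        return self::$client;
    }
}

// Usage inside the report handler:
// $elasticClient = ElasticClientHolder::get($this->setHostsParams());
```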
What I'd point out, though, is that there's an Elasticsearch API called _msearch which enables you to send multiple search payloads at the same time, and the system will respond after all the individual requests have resolved. Here's some sample PHP usage:
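(A rough sketch; the index names and queries below are placeholders.)

```php
<?php
// _msearch takes pairs of header + body entries, one pair per search.
$params = [
    'body' => [
        // query 1
        ['index' => 'reports'],
        ['query' => ['match' => ['status' => 'open']]],
        // query 2
        ['index' => 'reports'],
        ['query' => ['range' => ['created_at' => ['gte' => 'now-7d/d']]]],
    ],
];

$results = $elasticClient->msearch($params);

// One entry per search payload, in the same order they were sent.
foreach ($results['responses'] as $response) {
    // ... handle each result set
}
```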
This might be useful if you need all of your ~20 requests resolved at once, though it may be of little use if you defer some of those requests until, say, the user scrolls down.
I am working on a desktop application with Electron and I am considering online storage for the data. I would like to get some ideas on the approach, as I couldn't find reliable answers from a Google search.
Approach 1: Electron app (front end) + PHP back end (e.g., purchase a hosting package from GoDaddy with a domain such as www.mysite.com).
With this approach I am planning to create API endpoints in PHP to perform basic CRUD.
Is this a good way?
Will this affect the speed/load time?
Are there better ways for this situation?
Thank you very much in advance for the help.
Well, this is not an easy topic. Your solution could work: your Electron app asks your server for data and stores data back to it. Anyway, the best solution depends on your application.
The most important points that you have to ask yourself are:
How often do you need to reach your server?
Could your users work without data from the server?
How long does it take to read and store data on your server? (It's different if you store a few KB or many GB of data.)
Must the data stored online be shared with other users, or does every user only access their own data?
If all the information is stored on your server, your startup will have to wait for the requests to complete, but you can show a loader or something similar to mitigate the wait.
In my opinion you have many choices, from the simplest (and slowest) to the most complex (which mitigates network lag):
Simple AJAX requests to your server: as you described you will do some HTTP requests to your server and read and write data to be displayed on your application. Your user will have to wait for the requests to complete. Show them some loading animations to mitigate the wait.
There are some solutions that save the data locally in your Electron installation and then sync it online. Have a look at PouchDB for an example.
Recently I've been looking at GraphQL. GraphQL is a query language for your data. It's not that easy, but it has some interesting features: it has an internal cache and is designed with optimistic updates in mind. You update your application immediately, assuming your POST will succeed, and then if something goes wrong you update it accordingly.
I'd also like to suggest trying some solutions offered as a service. You don't have a server yet and you will have to sign a new contract anyway, so why not check out a dedicated service like Firebase? Google's Firebase Realtime Database lets you work in JavaScript (just one language involved in the project) and syncs your data online automatically, and between devices, without the need to write any web service. I have only played with it for some prototypes, but it looks very interesting and it's cheap. It also has a free plan that is enough for many users.
Keep in mind that if each user only accesses their own data, the fastest and easiest solution is to use a database inside your Electron application: an SQLite database, an IndexedDB database, or even serializing to JSON and storing everything in localStorage (if your data fits the size limits).
Hope this helps
I'm building an MVC web application (let's call it xyz.com). I want it to also support mobile apps via an API on the same server (let's call this api.xyz.com).
I'm confused about how to structure the web app vs. the API:
Should the web app use the API to query the database, or should it perform queries independently (e.g., using a Model)? I mean, should the flow be (User > Controller > API > Database) or (User > Controller > Model > Database)?
If the web app uses the API to query the database, how would it call the API? Would you use something like cURL?
If we use cURL, wouldn't that slow down the process (as we are making a second request from the controller)? What's the ideal way to do this?
I've tried reading up on API-centric web apps, but there's not too much information about this on the net.
Any guidance would be appreciated.
Since I don't have enough points to comment, I'll just leave my thoughts here.
1) I would make the web app use the API for 2 main reasons:
I don't like having multiple components accessing my database; you'll most likely end up with a lot of duplicate code for data extraction, and if you have a lot of filtering you might make mistakes and the components will behave differently.
I believe (if I'm wrong, please correct me) that it'll be easier to improve performance in the app than in the database. Since the API serves resources, it'll be easier to cache those than a full HTML web page.
2) To query the API I would most likely use Composer and find a nice HTTP library; I'm not sure which one, though, since I haven't used PHP in a while (Symfony's and Laravel's clients are quite nice if I remember correctly). See the sketch after point 3.
3) Yes, it might slow down the process a little, but I believe it won't be enough to be noticed by the user. As I said above, with correct cache handling you'll do just fine.
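Here's a rough sketch of what querying the API from the web app could look like with Guzzle, just one example of a Composer-installable HTTP client; the endpoint and response shape are invented for illustration:

```php
<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;

// One reusable client pointed at the API host.
$api = new Client([
    'base_uri' => 'https://api.xyz.com/',
    'timeout'  => 5.0,
]);

// Inside a controller action: fetch a resource and decode the JSON body.
$response = $api->request('GET', 'users/42');
$user = json_decode((string) $response->getBody(), true);
```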
Hope my thoughts on this help; if I'm wrong somewhere, please feel free to correct me in the comments below (don't know if I'll be able to respond though).
Have a nice day!
You really want to know the difference between an n-tier architecture and a single-tier architecture. An n-tier architecture consists of several tiers, which are connected via an internal protocol and API. E.g.:
backend:  data --- application --- HTTP server
frontend: HTTP client --- application --- HTTP server --- browser
You could have many more tiers in there. Those tiers can all run on the same physical machine, but they still talk to each other over HTTP; more likely you'd want to run each tier on a different machine though.
Yes, this will obviously incur some overhead in talking to your database backend over HTTP instead of doing it within the same PHP process. However, this is offset by:
Caching is built into this architecture. If you make proper use of HTTP caching, with a fully capable HTTP server and client, you can enormously reduce the absolute number of queries made. Practically, you'd set up a reverse proxy on the web server of your frontend server, so your queries go PHP → curl → web server reverse proxy → backend server. If the reverse proxy is caching properly, that's where the chain often stops, which can be much faster than executing the actual query on the database. If the backend server is using HTTP caching effectively, it too can often respond with a simple 304 Not Modified, which is also very fast. (A sketch follows after these two points.)
Load is distributed among more machines, which speeds up each individual machine. If your frontend servers can largely work with cached data without needing to bother the database, the database is much faster for the queries that it does need to handle. You can also scale up your frontend servers to many instances, each of which is faster because it has less load.
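To make the caching point concrete, here is a rough sketch of a conditional request from the frontend tier to the internal API; the internal host name and the local cache of the last ETag/body are assumptions:

```php
<?php
// $localCache is a hypothetical store for the last ETag/body we saw.
$etag = $localCache->get('users.etag');

$ch = curl_init('http://backend.internal/users');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
if ($etag !== null) {
    // Ask the backend (or the reverse proxy in front of it) whether our
    // cached copy is still valid.
    curl_setopt($ch, CURLOPT_HTTPHEADER, ['If-None-Match: ' . $etag]);
}
$body   = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

if ($status === 304) {
    // Nothing changed: reuse the representation we already have.
    $body = $localCache->get('users.body');
}
```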
These are the advantages of such an architecture. A further advantage is that you can also directly expose your internal HTTP API to the outside world (again, reverse proxies make a lot of sense here). If you're not using an internal HTTP API, you have to write:
A controller and view which gets data from the model and renders to HTML.
A controller and view which gets data from the model and renders to JSON (or whatever).
To some degree controllers can be reused with just the view swapped out, if everything else is the same, but often you'll find that you need to duplicate each logical "thing" to be served as HTML in one case and JSON in another, and you need to keep those in sync.
I have added a chat capability to a site using jQuery and PHP and it seems to generally work well, but I am worried about scalability. I wonder if anyone has some advice. The key area for me, I think, is efficiently managing awareness of who is online.
Detail:
I haven't implemented long-polling (yet) and I'm worried about the raw number of long-running processes in PHP (Apache) getting out of control.
My code runs a periodic jQuery AJAX poll (every 4 seconds) that first updates the DB to say I am active and sets a timestamp.
Then there is a routine that checks the timestamps of all active users and marks those outside the window (10 minutes) as inactive.
This is fairly normal from my research so far. However, I am concerned that if I allow every active user to check every other active user, and then have everyone update the DB to kick off inactive users, I will get duplicated effort, record locks and unnecessary server load.
So I have implemented an idea of the role of a 'sweeper'. This is just one of the online users, who inherits the role of the person doing the cleanup. Everyone else just checks whether there is a 'sweeper' in existence (DB read) and carries on. If there is no sweeper when they check, they make themselves sweeper (DB write for their own record). If there are more than one, make yourself 'non-sweeper', sleep for a random period and check again.
My theory is that this way there is only one user regularly writing updates to several records on the relevant table and everyone else is either reading or just writing to their own record.
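(For illustration, here is a rough sketch of that sweeper election as I understand it; the table and column names — chat_users, is_sweeper, is_active, last_seen — are invented, and $pdo/$myUserId are assumed to exist.)

```php
<?php
// Heartbeat: every poll, mark my own row as active.
$pdo->prepare('UPDATE chat_users SET last_seen = NOW() WHERE id = ?')
    ->execute([$myUserId]);

// How many active sweepers are there right now?
$sweepers = (int) $pdo->query(
    "SELECT COUNT(*) FROM chat_users
     WHERE is_sweeper = 1 AND last_seen > NOW() - INTERVAL 10 MINUTE"
)->fetchColumn();

if ($sweepers === 0) {
    // Nobody is sweeping: volunteer (a write to my own row only).
    $pdo->prepare('UPDATE chat_users SET is_sweeper = 1 WHERE id = ?')
        ->execute([$myUserId]);
} elseif ($sweepers > 1) {
    // Too many volunteers: step down, pause briefly, try again on the next poll.
    $pdo->prepare('UPDATE chat_users SET is_sweeper = 0 WHERE id = ?')
        ->execute([$myUserId]);
    usleep(random_int(100000, 500000));
} else {
    // Exactly one sweeper: if that's me, expire everyone who has gone quiet.
    $stmt = $pdo->prepare('SELECT is_sweeper FROM chat_users WHERE id = ?');
    $stmt->execute([$myUserId]);
    if ((int) $stmt->fetchColumn() === 1) {
        $pdo->exec(
            "UPDATE chat_users SET is_active = 0
             WHERE last_seen < NOW() - INTERVAL 10 MINUTE"
        );
    }
}
```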
So it works OK, but the possible problem is that the process requires a few DB reads and may actually be less efficient than just letting everyone do the cleanup, as in the other approaches I mentioned.
I have had over 100 concurrent users running OK so far, but the client wants to scale up to several hundred, even over 1,000, and I have no way of knowing at this stage whether this idea is good or not.
Does anyone know whether this is a good approach or not, whether it is scalable to hundreds of active users, or whether you can recommend a different approach?
As an aside, long polling / Comet for the actual chat messages seems simple, and I have found a good resource for the code, but several blog comments suggest it's dangerous with PHP and Apache specifically (active threads etc.), with the impact minimised by usleep and session_write_close.
Again, does anyone have practical experience of a PHP long-polling setup for hundreds of active users? Maybe you can put my mind at ease! Do I really have to look at migrating this to Node.js (no experience)?
Thank you in advance
Tony
My advice would be to do this with the Meteor framework, which should be pretty trivial even if you are not an expert, and then simply load that chat into your PHP website via an iframe.
It will be scalable, won't consume many resources, and it will only get better in the future, I presume.
And it sure beats both PHP Comet solutions and jQuery/AJAX timeout-based calls to the server.
I even believe you could find a more or less complete solution on GitHub that just requires tweaking.
But of course, do read the docs before you implement it.
If you're worried about security issues, read up on security with Meteor.
Long polling is indeed pretty disastrous for PHP. PHP always runs with a limited number of concurrent processes, and it will scale great as long as you optimize for handling each request as quickly as possible.
Long polling and similar solutions will quickly fill up your pipe.
It could be argued that PHP is simply not the right technology for this type of stuff with the current tools out there. If you insist on using PHP, you could try ReactPHP, which is a framework for PHP quite similar in design to NodeJS. The implication with React is also that it's expected to run as a separate daemon, and not within a web server such as Apache. I have no experience of its stability or how well it scales, so you will have to do the testing yourself.
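For what it's worth, here is a minimal sketch of what a ReactPHP HTTP daemon looks like (assuming react/http ^1.5 installed via Composer); a real chat server would hold connections open and push messages, this only shows the run model:

```php
<?php
require 'vendor/autoload.php';

use Psr\Http\Message\ServerRequestInterface;
use React\Http\HttpServer;
use React\Http\Message\Response;
use React\Socket\SocketServer;

// A single long-lived process handles all requests on its event loop,
// instead of one Apache/PHP process per request.
$http = new HttpServer(function (ServerRequestInterface $request) {
    return new Response(200, ['Content-Type' => 'text/plain'], "hello\n");
});

$socket = new SocketServer('127.0.0.1:8080');
$http->listen($socket);
```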
NodeJS is not hard to get into if you know JavaScript well. NodeJS + socket.io make it really easy to write the chat server and client with WebSockets. This would be my recommendation. When I started with this, I had something nice up and running within a few hours.
If you want to keep your application stack in PHP, want the chat application running in your actual web app (not an iframe), and are concerned about scaling your realtime infrastructure, then I'd recommend you look at a hosted service for the realtime updates, such as Pusher, who I work for. This way the hosted service handles the scaling of the realtime infrastructure for you and lets you concentrate on building your application functionality.
This way you only need to handle the chat message requests - sanitize/verify the content - and then push the information through Pusher to the thousands of connected clients.
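Roughly, the server side looks like the sketch below, using the pusher/pusher-php-server Composer package; the credentials, channel and event names are placeholders:

```php
<?php
require 'vendor/autoload.php';

$pusher = new Pusher\Pusher('app_key', 'app_secret', 'app_id', ['cluster' => 'eu']);

// After sanitising/verifying the submitted message...
$message = htmlspecialchars(trim($_POST['message'] ?? ''), ENT_QUOTES, 'UTF-8');

// ...broadcast it to everyone subscribed to the chat channel.
$pusher->trigger('chat-room-1', 'new-message', [
    'user' => $currentUserName,   // assumed to come from your own session handling
    'text' => $message,
]);
```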
The quick start guide is available here:
http://pusher.com/docs/quickstart
I have a full list of hosted services in my realtime web tech guide.
Short task description: I want one signed-in user to be able to send an instant, short, plain-text message to another signed-in user. The solution needs to be easily scalable and not too resource-demanding in terms of bandwidth and server load (and $$).
The first idea was to do client polling, but this was quickly abandoned since it didn't meet the scalability requirement. After that, I did some research and came across a number of concepts including sockets, Node.js and XMPP. The amount of information is a bit overwhelming, so I was hoping for some advice to point me in the right direction. Hopefully something with readily available hosting solutions.
#epascarello:
Thanks for the quick response. I did, but not in detail. Before going in depth into any technology, I want to know that it is actually what I need.
Most of the examples concentrate on instant chat, but my requirements are somewhat different. I don't need every signed-in user to see a message, but only the one particular user for whom it was meant, while there can be, say, 100,000 users logged in...
#Saeed Neamati:
Thanks! Yes, I pretty much understand the two client-server communication options and have come to the conclusion that pulling is a no-go. What I am trying to find now is the most scalable (that's the main prerequisite) and, hopefully, easiest-to-implement push option. For instance, the socket option is relatively easy, but it seems like it's not going to scale well due to server overload (or am I wrong?). Node.js (at least by the concept description) should be better at that, but I wanted to get some confirmation of this assumption. With XMPP, I'm not even sure how relevant it is to my task or how to approach it.
#andyuk:
Andy, thanks. Yes, socket.io is also something that I came across while doing research. As far as I understand, it requires a server module that needs to run on a host. Do you know if it's possible to run it on any server, or do I need to look for a specialized hosting company? The socket.io site for some reason doesn't work on my PC (in either IE or FF).
Did you look at the source code of the Node.js chat?
Look, you only have two options for client-server communication. Either the client starts a request (an HTTP request on the web), which is called the pull model (the client pulls the data out of the server), and the server responds to it; or the server starts sending data directly without receiving any request (an HTTP response on the web), which is called the push model (the server pushes the data out to the client).
What you described as polling is actually the pull model, and indeed it takes lots of resources from the server.
On the other hand, when you want to use the push model, your server has to know the client. In other words, we know that HTTP (based on TCP/IP) is a stateless protocol, which means that after each request your connection is closed, and the server loses track of you and forgets about you.
If you want the server to know the client, you should keep the connection open. This is usually done via some HTTP headers like Keep-Alive and Connection.
But to do that you should read about Comet programming. However, this reduces your scalability, because more connections are kept open in a one-to-one mapping between connection and client. (To understand this better, you can think of connections as doors of the server: the longer you occupy a door as a client, the less other clients can use it.)
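For illustration, a minimal Comet-style long-poll endpoint in PHP might look like the sketch below; fetchNewMessagesFor() is a hypothetical helper for whatever storage you use, and note that each open request still ties up one PHP process, which is exactly the scalability cost described above:

```php
<?php
session_start();
$userId = $_SESSION['user_id'] ?? 0;
session_write_close();              // release the session lock so other requests can proceed

$since   = isset($_GET['since']) ? (int) $_GET['since'] : time();
$started = time();

while (time() - $started < 25) {    // hold the request open for up to ~25 seconds
    $messages = fetchNewMessagesFor($userId, $since);   // hypothetical helper
    if (!empty($messages)) {
        header('Content-Type: application/json');
        echo json_encode(['messages' => $messages, 'since' => time()]);
        exit;
    }
    usleep(500000);                 // check twice per second instead of hammering the DB
}

// Nothing arrived before the timeout: the client re-polls immediately.
header('Content-Type: application/json');
echo json_encode(['messages' => [], 'since' => time()]);
```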
Check out socket.io. If WebSockets aren't supported by the browser, it will fall back to the next best transport technology.
There is even a chat example included in the source code.
As for your concerns about scalability, Node.js is perfect for this due to its event-driven, non-blocking nature. Handling many open connections is one of Node's real strengths.
Plurk uses Node.js for their real-time chat features and they support 100k+ users.
I'm trying to find an efficient way to watch the server log on a web page. I don't mind building an app; I just can't work out the best way to do it.
Is there a way to keep a stream open to a file with PHP and to the browser, or will it have to be done by polling the file every x seconds?
Thanks in advance,
Shadi
The best solution is definitely AJAX in some capacity. The only way to have the server "push" to you the way you describe (maintaining an open stream) would require the HTTP connection to remain open, which would ultimately trigger timeouts and consume a lot of resources. I would look into the Cometd library. The downside is that I believe it depends on Java, although the site does mention Perl, Python and "other languages." In the worst case, you could use a specific Jetty implementation just for log monitoring on a specific port. Regardless, that framework would most likely be your best bet.
Any web-based chat mechanism essentially uses a push architecture and would be good to look at for some inspiration. In this case, instead of users creating messages that are fired to other users, the server creates the events (when a log message is generated). Check out this article on Facebook chat for some insight into how they do it. Google chat might be worth looking into if you can find some stuff on the architecture.
For the actual logging, I'm not sure if you need help with that, but log4php, which is currently under incubation, might be a good place to start, as it provides a configuration that can simultaneously log to an arbitrary number of "loggers" such as database, file, socket, etc. You could likely find one that would allow you to tie it into whatever push framework you elect to use.
Good luck!
Remember that the web model is essentially stateless (disconnected). With that in mind, when a client submits a request, the server processes the request and then sends a response accordingly. You can keep track of the client's actions using cookies and/or sessions, but the resources reserved for a request are released after the response is sent back.
I think the best way to meet your goal is to develop a web service that checks the status of the log and fetches the diff (if any). Your app may consist of a web page with a div that displays the diff from the web service.
A script with a timer will trigger the call to the web service.
I will try to do something like this in a few weeks, and I will post the entire solution on the moropo blog (in Spanish). You can ask for a translation of the post in the comments.
The best way to do it is to use AJAX to pull the file content every x seconds, giving the illusion of real time.
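A rough sketch of the server side of such a poll (the log path is a placeholder): the browser keeps the byte offset it last saw, and only the newly appended content comes back on each request.

```php
<?php
$logFile = '/var/log/myapp/app.log';    // placeholder path
$offset  = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;

$size = filesize($logFile);
if ($offset > $size) {
    $offset = 0;                        // log was rotated/truncated, start over
}

$handle = fopen($logFile, 'r');
fseek($handle, $offset);
$newLines = stream_get_contents($handle);
fclose($handle);

header('Content-Type: application/json');
echo json_encode([
    'offset' => $size,                  // the client sends this back on the next poll
    'lines'  => $newLines,
]);
```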
If you do want real time, you can use an XMPP server, but from what I can see, the first solution is more than sufficient and doesn't require a lot of work.
Try wonlog.
https://www.npmjs.com/package/wonlog
You can stream multiple log files to a web browser.