I want to load a page from different sources, the way online flight ticket reservation sites work. Meaning the data needs to be pulled from different systems and shown on a single page.
I can do this by creating one file which collects the data from the different systems, merges it in the required sort order, and shows it on the page. But if any one of the source systems is slow, the entire page has to wait until all the results are received from the various sources.
The question is:
Is it possible to show content retrieved from various sources without any middle layer manipulating the data before display? Meaning the page shows content as soon as it arrives from whichever site responds first, and reorders it as content arrives from the other sites.
Thanks in advance for your help.
The way I solved this issue is described below.
Created one aggregation layer, which makes asynchronous requests (cURL) to the different systems.
Upon receiving a response from one system (whichever comes first), it stores the result in cache (memcache) and displays it on the page.
Then, when a response comes from another system, it aggregates that result with the previous results stored in cache and refreshes the page with the aggregated data.
I know this is not a good solution, but since I don't have any better option, I'm handling it like this for now.
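A minimal sketch of the asynchronous cURL part of that aggregation layer, using curl_multi. The source URLs are hypothetical, and the caching/merging step is left out:

```php
<?php
// Sketch: fire requests to the source systems in parallel with curl_multi and
// handle each response as soon as it arrives. The URLs are placeholders.
$urls = [
    'systemA' => 'https://api.system-a.example/flights',
    'systemB' => 'https://api.system-b.example/flights',
];

$mh = curl_multi_init();
$handles = [];
foreach ($urls as $key => $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[$key] = $ch;
}

$results = [];
do {
    curl_multi_exec($mh, $running);

    // Process whichever transfers have completed so far, without waiting
    // for the slowest system.
    while ($info = curl_multi_info_read($mh)) {
        $ch  = $info['handle'];
        $key = array_search($ch, $handles, true);
        $results[$key] = curl_multi_getcontent($ch);
        // ...store $results[$key] in memcache / push it to the page here...
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }

    if ($running) {
        curl_multi_select($mh); // wait for activity instead of busy-looping
    }
} while ($running);

curl_multi_close($mh);
```

Each completed response can be written to memcache and shown as it arrives, so the total wait is bounded by the slowest system only for the final, fully aggregated view.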
I am working on an application using Flutter and a MySQL DB with PHP Laravel. I have the same post on two different pages (the posts come from a view in the DB).
When I delete, add, or edit (do CRUD operations on) that post on the first page, the change is not reflected on the other page of the application (inconsistency), so I have to refresh the other page (call the API) to show the change there.
I tried getting all the lists that contain posts and searching them to see whether the post exists there so I can apply the change, but there are 3 or 4 lists I need to check every time, and this makes the process heavy on RAM for a phone application. I also searched for something like AJAX on the web but found nothing.
What I need is for a post deleted or edited on the first page to also be deleted or edited on the second page, without doing a refresh.
I am a beginner programmer/coder who is currently trying to get to grips with HTML and PHP. I currently have a locally hosted searchable database that (when used) brings up a list of the first twenty entries that correspond to your search terms, with buttons to send you to the next page, last page, etc. (you know... pages...) The search outputs information into the URL ([url]?q=Alphonse&f=Elric etc). I have two problems with this at the moment.
Problem A:
My URL contains information that is unused. If I don't put anything into the search term it simply comes out with "q=&x=&f=..." etc. This makes the URL absurdly long even for the simplest searches.
Can this be cleaned up using just PHP?
Would this method be different if I end up hosting this online?
Problem B:
The way my paging works is to send the user to the following link: '.$_SERVER['PHP_SELF'].'?'.$_SERVER['QUERY_STRING'].'&pn='.$nextPage.'. This outputs the current link but with "pn=1" (or whichever page they click) appended.
This method itself makes the URL quite messy. If they click through multiple pages, and perhaps go back and forth, the link ends up with "pn=1&pn=2&pn=3&pn=1...." etc. at the end. I assume the answer to Problem A will cover this too, but it is slightly different in that this information is actually present.
How do I remove this superfluous information, and just keep the (final) relevant one?
I am thinking that I can use parse_str to turn the query string into an array, then delete every entry of the array that is empty, then build a new string out of that array and make it the link the search/next-page button goes to.
Does that sound like it would work? If so, how do I delete those specific array entries, and how would the array then be stored? Would the array lose those entries, so that calling a deleted entry such as "$array['1']" results in an error, or does deleting entries in an array move everything up one to fill the gap?
Apologies if I'm asking too many different things in one post here!
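A minimal sketch of the parse_str idea described in the question, assuming the page number is the same $nextPage variable used in the paging link. Because the keys are parameter names (strings), nothing gets renumbered when entries are removed; a removed key simply no longer exists, and reading it would raise an undefined-key notice, so check with isset() first:

```php
<?php
// Sketch: rebuild the query string without empty parameters and with a single
// pn value, instead of appending another &pn=... each time.
parse_str($_SERVER['QUERY_STRING'], $params); // e.g. ['q' => 'Alphonse', 'x' => '', 'pn' => '1']

// Drop entries whose value is an empty string; the remaining keys keep their
// names, nothing shifts to fill the gap.
$params = array_filter($params, function ($value) {
    return $value !== '';
});

// Overwrite (or set) the page number rather than appending a duplicate.
$params['pn'] = $nextPage;

$link = $_SERVER['PHP_SELF'] . '?' . http_build_query($params);
```

Note that when duplicate parameters such as pn=1&pn=2 are already in the query string, parse_str keeps only the last value, which is then overwritten here anyway.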
I'm building a web app using Yii2 (PHP, MySQL). Users can click on others' game results to see which item the user used for that result. There can be from 100 to 1000 results displayed on a single page. I don't know which option is best in terms of speed for the page and the server:
1 - On page load, a modal is loaded for every result and displayed when users click on a result. This way, there can be from 100 to 1000 modals loaded on the page. Is this too heavy considering that only a few of them will be used? Or sometimes even none of them.
2 - Load only one modal that is brought up when users click on any result, and dynamically adjust its content with an AJAX request to the server depending on which result was clicked. This way, less code is loaded on the page but more requests go to the server.
The first option is easier to code, but I think the second one might be better for page load. I'm far from an expert in page size and server request handling, so I'd like to get some opinions.
I think you've answered the question yourself.
The second option is right. Only one modal should exist (per type of action: edit, view, etc.). Then use AJAX to load data only when it is requested.
Your users will thank you. The page size and load times will be significantly better.
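On the server side, that can be a small action that returns just the clicked result's data as JSON for the single modal's AJAX call to consume. This is only a sketch under assumed names: a Result ActiveRecord model with an item_name column, neither of which comes from the question:

```php
<?php

namespace app\controllers;

use yii\web\Controller;
use yii\web\NotFoundHttpException;
use app\models\Result; // hypothetical ActiveRecord model

class ResultController extends Controller
{
    // Returns the data for a single result as JSON; the one modal on the page
    // fetches this via AJAX when a result is clicked.
    public function actionItem($id)
    {
        $result = Result::findOne($id);
        if ($result === null) {
            throw new NotFoundHttpException('Result not found.');
        }

        return $this->asJson([
            'id'   => $result->id,
            'item' => $result->item_name, // hypothetical column
        ]);
    }
}
```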
I have a database of different stores.
When a user clicks on a store name, I want an Ajax function to run displaying information about the store in a different div.
Information categories for all stores will be the same: products carried, location, general information, etc.
I could easily make it so that each different store uses a different file name as an argument to the ajax function, and all files would have the same layout/format but with different data.
However, I feel like this is bad form. How can I make it so that I have one fixed template and all that changes is the specific information put into the template?
Please note that the store information display pages will also need clickable links of their own (e.g. click on the location and a Google map pops up).
Is this something to do with XML? I don't know much about it.
Instead of returning a template, return the data.
So the request goes to getstore.php?id=2, which returns a JSON string:
{"name":"my store", "info" :"blah"}
Then you use JavaScript to insert a new div populated with that data.
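A minimal sketch of what getstore.php could look like, assuming a PDO connection and a stores table with name and info columns (the DSN, credentials, table, and column names are all illustrative):

```php
<?php
// getstore.php -- returns one store's data as JSON for the clicked store.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass');

$id = isset($_GET['id']) ? (int) $_GET['id'] : 0;

$stmt = $pdo->prepare('SELECT name, info FROM stores WHERE id = ?');
$stmt->execute([$id]);
$store = $stmt->fetch(PDO::FETCH_ASSOC);

header('Content-Type: application/json');
echo json_encode($store ?: ['error' => 'Store not found']);
```

The JavaScript side only has to request this URL for the clicked store and drop the returned fields into the one template div.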
I maintain a hobby website that, among other things, chronicles whether certain items are in print or out of print at a particular web store.
The store's management removes products when they are out of stock, and re-adds the pages when they're back in stock.
Scraping the category page's item list for item titles is easy enough, but I'm not sure what to do about pages with more results than are shown.
The pages default to 10 items, and clicking Next loads up the next 10 via AJAX.
Is there a standard way of handling and scraping such setups?
If you use the developer feature of your web browser (Firebug, Inspector, Developer Tools, ...) you should be able to see the connections being made to retrieve the data through AJAX, and the request and response headers being sent and received.
The request headers will contain the data being sent as well as the URL that's been requested. The query string of the URL or the POST data will most likely contain a "start" or "next" or some type of parameter that identifies the starting position and number of results to return.
You can then use PHP and cURL to automate the rest of the process.
Here's a screenshot of what the "Web Inspector" looks like in Safari 5.1 on OS X (Chrome looks identical):
What's relevant to you here is the Request URL, Request Method and what's under Form Data. The text on the left (in light grey) is the parameter and the text on the right is the value.
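A minimal sketch of that last step, assuming the endpoint takes a start offset and a count via POST. The URL and parameter names here are assumptions, so copy the real ones from the Request URL and Form Data shown in the inspector:

```php
<?php
// Sketch: replay the site's AJAX pagination request with cURL, 10 items at a
// time. The URL and the start/count parameter names are placeholders.
$pages = [];
for ($start = 0; $start < 100; $start += 10) {
    $ch = curl_init('https://example.com/store/category/items');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => http_build_query(['start' => $start, 'count' => 10]),
    ]);
    $response = curl_exec($ch);
    curl_close($ch);

    if ($response === false) {
        break; // request failed; stop paging
    }

    // Parse the fragment (HTML or JSON, depending on the site) and pull out
    // the item titles here.
    $pages[] = $response;
}
```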