Closed 8 days ago. This question needs to be more focused and is not currently accepting answers.
Before getting into the problem, I want to note that, per the guidelines at https://stackoverflow.com/help/on-topic, this question falls under the fourth category, "a practical, answerable problem that is unique to software development". So please do not treat it as a request to debug something specific; it is a design-oriented question.
A third-party API returns 100 MB to 500 MB of JSON, and when I try to save that to a file or write it to a database, the operation usually times out.
I know I can increase the execution time or maximum timeout each time, and that would cover the immediate need.
But my main problem is understanding the right way to handle a dataset of that size: what would a good design or algorithm for working with such a large dataset look like?
I also want to understand whether it is good API design to return that much data, on the order of gigabytes, in a single response.
My main goal is to get feedback from experienced software designers and computer scientists.
Please also help me understand whether there is a way to get all the data without increasing the timeout. Postman, for example, displays much larger responses from APIs without any change to execution time: it loads part of the data and then fetches the remaining data on demand, like a lazy loader (a streaming sketch follows below).
Thanks in advance.
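For illustration, a minimal sketch of streaming a large JSON response straight to disk in PHP, so the whole body never has to sit in memory; the endpoint URL and file path are placeholders, and allow_url_fopen is assumed to be enabled:

```php
<?php
// Minimal sketch: stream a large JSON response straight to a file
// instead of holding the whole payload in memory.
// The URL and target path are placeholders; allow_url_fopen must be enabled.
$url  = 'https://api.example.com/big-export.json'; // hypothetical endpoint
$file = '/tmp/big-export.json';

set_time_limit(0); // this is a long transfer, not a normal web request

$src = fopen($url, 'rb');  // the http:// wrapper reads the body lazily
$dst = fopen($file, 'wb');

if ($src === false || $dst === false) {
    throw new RuntimeException('Could not open source or destination stream');
}

// Copies in small internal chunks, so memory use stays flat even for
// a response of several hundred megabytes.
stream_copy_to_stream($src, $dst);

fclose($src);
fclose($dst);
```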
Closed 22 days ago. This question needs to be more focused and is not currently accepting answers.
I am building an application in PHP that requests data from a third-party API, stores and processes that data and then submits additional API requests based on the data received from the first request.
The issue is that there are several rate limits, and when there is a large volume of data to request, I need to make many paginated API requests at 2-second intervals to avoid being blocked. Essentially, the program keeps looping, making an API request every 2 seconds, until there is no longer a next-page URL in the response header.
Depending on the amount of data, this could take several minutes, up to several hours. I can increase the max execution time in php.ini, but that is not efficient and could still result in a timeout if one day the program has too much data to work with.
I'm sure there must be a better way to manage this, possibly with serverless functions or some kind of queuing system running in the background (a sketch of the basic worker loop follows below). I have never worked with serverless functions, so it will be a learning curve, but I am happy to learn if needed.
I would love to hear what anyone thinks the best solution is. I am building the application in PHP, but I can work with JS or Node.js if I need to.
Many thanks in advance.
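For reference, a minimal sketch of the paginated fetch loop described above, written to run from the CLI or a cron job so the web request timeout does not apply; the endpoint, the "X-Next-Page" header name, and the processing step are assumptions about the third-party API:

```php
<?php
// Sketch of the paginated fetch loop, intended to be run from the CLI
// (php fetch_worker.php) or a cron job rather than inside a web request,
// so the web server's execution-time limit is not a factor.
set_time_limit(0);

$next = 'https://api.example.com/items?page=1'; // hypothetical first page

while ($next !== null) {
    $context = stream_context_create([
        'http' => ['header' => "Accept: application/json\r\n"],
    ]);
    $body  = file_get_contents($next, false, $context);
    $items = json_decode($body, true);

    // ... store/process $items here, e.g. insert into the database ...

    // file_get_contents() fills $http_response_header with the response
    // headers; look for a next-page URL in them (header name is assumed).
    $next = null;
    foreach ($http_response_header as $header) {
        if (stripos($header, 'X-Next-Page:') === 0) {
            $next = trim(substr($header, strlen('X-Next-Page:')));
            break;
        }
    }

    sleep(2); // keep 2 seconds between requests to respect the rate limit
}
```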
You can use a queue for that. There are plenty of packages, and you can choose one depending on your needs.
You can also use asynchronous requests, for example from Guzzle or another HTTP client (which speeds up the fetching), and you can easily implement delay/retry middleware for the rate limiter.
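A minimal sketch of the retry-middleware idea, assuming Guzzle is installed via Composer (composer require guzzlehttp/guzzle); the retry count, delay, and endpoint are arbitrary examples:

```php
<?php
// Retries a request that hits the rate limit (HTTP 429) with an
// increasing delay instead of failing outright.
use GuzzleHttp\Client;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Middleware;
use Psr\Http\Message\RequestInterface;
use Psr\Http\Message\ResponseInterface;

require __DIR__ . '/vendor/autoload.php';

$stack = HandlerStack::create();
$stack->push(Middleware::retry(
    // Decider: retry up to 5 times when the API answers 429 Too Many Requests.
    function (int $retries, RequestInterface $request, ?ResponseInterface $response = null): bool {
        return $retries < 5 && $response !== null && $response->getStatusCode() === 429;
    },
    // Delay before the next attempt, in milliseconds: 2s, 4s, 6s, ...
    function (int $retries): int {
        return 2000 * $retries;
    }
));

$client = new Client([
    'handler'  => $stack,
    'base_uri' => 'https://api.example.com', // hypothetical API
]);

$response = $client->get('/items?page=1');
$items    = json_decode((string) $response->getBody(), true);
```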
Closed 1 year ago. This question needs to be more focused and is not currently accepting answers.
In my web app I have to show content depending on the user's location. Since it is not practical to resolve the current location on every request, I can either put it in a URL parameter or store it in the session for subsequent requests. I am confused about which one will be faster: parsing the location from the URL parameter, or reading it from the session on every request?
Getting information from the URL itself will probably always be faster than using sessions, since it is available right away in memory. How much faster depends on your session storage backend; sessions stored in an external database may take a few milliseconds to load, for example.
Testing it locally and sequentially will probably yield the same results for both methods. To get a reliable benchmark you would need to test concurrently, with hundreds or thousands of requests per second.
Either way, you shouldn't worry about that kind of optimization; just choose the solution that will be easier to maintain. The URL has the added benefit of being scalable and stateless.
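To make the two options concrete, a minimal sketch of both reads; the parameter name, session key, and resolveLocation() helper are hypothetical:

```php
<?php
// 1) Stateless: the location travels in the URL, e.g. /news?location=Dhaka
$location = $_GET['location'] ?? 'default';

// 2) Stateful: resolve the location once, then read it from the session.
session_start(); // cost depends on the session handler (files, DB, Redis, ...)
if (!isset($_SESSION['location'])) {
    $_SESSION['location'] = resolveLocation(); // hypothetical geo-lookup
}
$location = $_SESSION['location'];
```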
Closed 8 years ago. This question needs to be more focused and is not currently accepting answers.
I'm writing my first AJAX driven website and some parts of it are kind of slow. For example, I'll click a button to change the page, and it might take three seconds for the new page to appear.
Chrome developer tools shows the network activity for the page being changed as follows:
DNS Lookup 1 ms
Connecting 45 ms
SSL 21 ms
Sending 0 ms
Waiting 1.89 s
Receiving 73 ms
The size of the above request was 49.1 KB.
Clearly the "Waiting" time is where the slowdown is occurring. My question is: what is causing this "Waiting" time? Is it something to do with the jQuery AJAX request, is the MySQL database slow, or is something in a PHP file causing the delay?
Without seeing my project and debugging it first-hand, you may not be able to answer that question. If that's the case, how should I go about determining which part of the application is slowing things down?
That depends on your debug tools. At the most basic level, comment out parts of your server-side code and check how much the "waiting" time drops.
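A rough sketch of that idea: wrap the main stages of the request handler in timers and log the split, so you can see whether the database or the PHP side eats the 1.89 s. The function names here are placeholders for your own code:

```php
<?php
// Time each stage of the request handler and log the result.
// runDatabaseQueries() and renderResponse() stand in for your code.
$t0 = microtime(true);

$rows = runDatabaseQueries();   // the MySQL work
$t1 = microtime(true);

$html = renderResponse($rows);  // the PHP/templating work
$t2 = microtime(true);

error_log(sprintf('db=%.3fs render=%.3fs', $t1 - $t0, $t2 - $t1));
echo $html;
```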
I don't know anything about profiling MySQL/PHP applications (in Django, you could use django-debug-toolbar), but Ajax queries are good candidates for caching at both the database and application output layers.
Consider using a cache system like memcached.
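A minimal sketch of that approach with the PHP Memcached extension, assuming a memcached server on 127.0.0.1:11211 and a hypothetical runSlowMysqlQuery() standing in for the expensive part of the request:

```php
<?php
// Cache an expensive query result so repeat Ajax requests skip the slow work.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$pageId = (int) ($_GET['page'] ?? 1);
$key    = 'page_data_' . $pageId;

$result = $cache->get($key);
if ($result === false) {                 // cache miss: do the slow work once
    $result = runSlowMysqlQuery($pageId); // hypothetical query function
    $cache->set($key, $result, 300);      // keep it for 5 minutes
}

header('Content-Type: application/json');
echo json_encode($result);
```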
Closed 9 years ago. This question is opinion-based and is not currently accepting answers.
Sorry if this has been asked before, but I can't find anything about it on the forum.
I am making a shipping calculator in PHP, and I get CSV files from my courier containing rates and places. My question is: is it better to read the CSV file into an array, or to import the CSV into a MySQL database and read the data that way?
If anyone has experience with this kind of situation and wouldn't mind telling me the best way to go about it, that would be great.
I have not tried anything yet because I would like to know the best approach first.
Thanks for reading.
Won't this depend upon how many times a day you need to access the data, and how often the shipping data is updated?
E.g. if the shipping data is updated daily and you access it 10,000 times per day, then yes, it would be worth importing it into a database so you can do your lookups there.
(This is the kind of job SQLite was designed for, by the way.)
If the shipping data is updated every minute, then you'd be best grabbing it every time.
If the shipping data is updated daily and you only access it 10 times, then I wouldn't worry too much: just grab the file, cache it, and access it as a PHP array (a sketch of both options follows below).
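For illustration, a minimal sketch of both options, assuming the courier CSV has two columns (place, rate); the file names and table layout are made up:

```php
<?php
// Option A: read the CSV straight into a PHP array keyed by place.
$rates = [];
$fh = fopen('rates.csv', 'rb');
while (($row = fgetcsv($fh)) !== false) {
    [$place, $rate] = $row;
    $rates[$place] = (float) $rate;
}
fclose($fh);

// Option B: import it once into SQLite and query it per lookup.
$db = new PDO('sqlite:' . __DIR__ . '/rates.sqlite');
$db->exec('CREATE TABLE IF NOT EXISTS rates (place TEXT PRIMARY KEY, rate REAL)');

$insert = $db->prepare('INSERT OR REPLACE INTO rates (place, rate) VALUES (?, ?)');
foreach ($rates as $place => $rate) {
    $insert->execute([$place, $rate]);
}

$lookup = $db->prepare('SELECT rate FROM rates WHERE place = ?');
$lookup->execute(['Cape Town']);   // example place name
$rate = $lookup->fetchColumn();
```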
Sorry, but I am not familiar with the data feed in question.
Closed 8 years ago. This question needs details or clarity and is not currently accepting answers.
When a visitor goes to the index.php file, the following code is run. There will be only one match in the DB, and the code then gets that user's role to include the correct page.
However, if nothing is found, it shows them a 404 page. I have tested it and it is running; my concern is performance. In my opinion this is better than checking if mysql_num_rows > 0 with an if { } else { } block.
For those of you with more experience than me: what do you think of it?
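Since the actual code is not shown here, the following is only a hypothetical reconstruction of the num_rows-style version being compared against, written with mysqli; the table and column names are invented:

```php
<?php
// Hypothetical reconstruction of the num_rows check being discussed
// (the asker's actual code is not shown). Table/column names are invented.
$db    = new mysqli('localhost', 'user', 'pass', 'app');
$token = $_GET['token'] ?? '';

$stmt = $db->prepare('SELECT role FROM users WHERE token = ?');
$stmt->bind_param('s', $token);
$stmt->execute();
$result = $stmt->get_result();

if ($result->num_rows > 0) {
    $row = $result->fetch_assoc();
    include __DIR__ . '/pages/' . $row['role'] . '.php'; // e.g. admin.php
} else {
    http_response_code(404);
    include __DIR__ . '/pages/404.php';
}
```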
I gather the code above is already functioning as intended?
Contacting the database to get the user's role will take significantly longer than almost any PHP code you might write to analyze the result that you get back, so I wouldn't worry so much about the efficiency of the rest of your PHP here.
Only when you're doing a long-running calculation or writing a loop that runs many, many times should you worry about efficiency more than clarity.
If this code is run 50,000+ times a second, then maybe you're right to ask about efficiency, but if it's a few hundred thousand times a day or less, your server will never feel the difference, and your time as a coder is far too valuable to spend thinking about this kind of optimization.