Info about the website I'm working on:
A website for live soccer scores.
The API provides real-time soccer score data (they have no webhooks).
What I want:
I want to deliver the real-time score to the frontend.
Also, I want to save that data in Redis temporarily and, after the match finishes, push it from Redis to the database.
Preferably without any external JS libraries such as Socket.IO ( http://socket.io ) or Pusher. Laravel Broadcasting + Redis is my preferred approach, since then I won't need Pusher or any socket JS code to load.
Parts of this problem:
Part 1: Pulling external API data into the database (or Redis).
--> So far, the only way I've managed to pull data from the API into the database is by creating a route that triggers the import from the external API. This is pretty useless as it stands, because the live score data in the API updates almost every second, so I would have to trigger the route (or refresh the URL) every second just to pull the data. Also, not to forget, it takes a minimum of 2-3 seconds just to completely transfer the API data to the database. This part should not depend on whether a user (the frontend) is requesting data; it should do its job even if there are 0 users online.
So, my question is: what is the best, most efficient and complete way to pull the API data in real time and keep it in Redis until the match is finished? (We can tell the status of a match from the API data, e.g. {match_id: {status: "finished"}}.) After the match is finished, I will push the data from Redis to the database.
Part 2: Publishing that data in real time from the database (or Redis).
-> Okay, this one seems easier to me than part 1; I've already found ways to publish Redis data in real time via Pusher and Socket.IO. But beyond those, what is the best approach for my scenario? Also, do I need any JS libraries if I use a combination of Redis + Laravel Broadcasting?
Thank you for your suggestion!
Possible answer for part 1:
I would use Task Scheduling to run an Artisan console command ->daily() or ->hourly() that checks when the next soccer match is and writes a record to the matches table, or updates existing records in case the start/end times change.
Another console command on ->cron('* * * * *'); can then watch the matches table (cron granularity is one minute; to poll every few seconds, loop with a short sleep inside the command itself). If the current time is between the starts_at and ends_at of a match, retrieve the real-time data.
To prevent multiple executions of the command at the same time (if for some reason an API call takes a bit longer), Symfony's LockableTrait can be used.
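A minimal sketch of what that polling command might look like, assuming Laravel's Http and Redis facades; the scores:pull name, the SoccerMatch model over the matches table, and the API endpoint are all made up:

    // app/Console/Commands/PullLiveScores.php
    namespace App\Console\Commands;

    use Illuminate\Console\Command;
    use Illuminate\Support\Facades\Http;
    use Illuminate\Support\Facades\Redis;
    use Symfony\Component\Console\Command\LockableTrait;

    class PullLiveScores extends Command
    {
        use LockableTrait;

        protected $signature = 'scores:pull';
        protected $description = 'Poll the external API for matches that are currently live';

        public function handle(): int
        {
            // Skip this run entirely if a previous one is still going.
            if (! $this->lock()) {
                return self::SUCCESS;
            }

            $live = \App\Models\SoccerMatch::where('starts_at', '<=', now())
                ->where('ends_at', '>=', now())
                ->get();

            foreach ($live as $match) {
                // Hypothetical endpoint; replace with your provider's URL.
                $data = Http::get("https://api.example.com/matches/{$match->external_id}")->json();

                // Keep the latest snapshot in Redis while the match is live.
                Redis::set("match:{$match->external_id}", json_encode($data));
            }

            $this->release();

            return self::SUCCESS;
        }
    }

Scheduled in app/Console/Kernel.php with:

    $schedule->command('scores:pull')->everyMinute()->withoutOverlapping();

When a snapshot comes back with status finished, the same command could move the Redis key into the database and then delete it.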
EDIT:
the server application is a pricer, with:
a main calculatePrice function that needs to execute various time-consuming tasks (get market data from external sources, do some calculations, etc.)
when the application starts, it will run the pricing function indefinitely (calculatePrice inside an infinite while loop, with maybe a 10-second wait between iterations so that each one requests updated market data from the external sources)
each calculation result should be persisted (timestamp, stock, price) in a cache or DB, or published to a data bus (whatever the method, it will be stored somewhere)
--> so this application has been started and is running on its own, indefinitely calculating stock prices and persisting them
Now comes along my HTML client with a simple "get stock price" button.
- the get stock function will send an AJAX request to a server-side PHP script, requestPrice.php
- the requestPrice PHP script would then ask the aforementioned application for its latest calculation
- finally, requestPrice returns a JSON array with the timestamp and price, and in the client HTML you could have a div that displays "latest price XX.YY calculated at HH:MM"
So for me the idea is to create this independent, continuously running pricing application.
I've found a lot of close answers, but nothing corresponding exactly to what I'm looking for; all of them call server-side scripts instead of having an independent application running on its own.
I hope I understood your problem, but I'm not completely sure about it.
Anyway, I think you might need to set up an entry in the system crontab that executes your script automatically at predefined intervals.
Then you can perform your "polling" of the server or whatever call you need to your files.
If you need a more reactive "real-time" technology, you could rely on WebSockets instead. There's a cool PHP library for that called Ratchet.
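Either way, the core of what you describe is small. A minimal sketch of the standalone pricer plus the AJAX endpoint, assuming a flat file as the shared store (the file path, the script names, and the calculatePrice() stub are made up; APCu, Redis, or a DB row would work just as well):

    // pricer.php: start once from the shell (php pricer.php &) or under supervisord

    // Stub: replace with the real time-consuming pricing calculation.
    function calculatePrice(): float
    {
        return random_int(100000, 200000) / 100;
    }

    while (true) {
        $snapshot = json_encode([
            'stamp' => date('H:i'),
            'price' => calculatePrice(),
        ]);

        // Persist the latest result for the endpoint to read.
        file_put_contents('/tmp/latest_price.json', $snapshot, LOCK_EX);

        sleep(10); // wait before requesting fresh market data
    }

    // requestPrice.php: called by the client's AJAX request
    header('Content-Type: application/json');
    echo file_get_contents('/tmp/latest_price.json');

The client-side div can then be filled from the stamp and price fields of the JSON response ("latest price XX.YY calculated at HH:MM").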
I am wondering how to auto-save API data from http://api.bitcoinaverage.com/ticker/USD and https://crypto-trade.com/api/1/ticker/dvc_btc to create a chart using jqPlot.
How can I make automated calls to each of the sites every 10 minutes and save the data, without it being overwritten by future calls?
Something like the chart here: vircurex.com/
You will need to use a combination of a database, a cache, and a cron (scheduled job) on your server to achieve what you want.
A high-level approach would be:
1) Run a backend cron job every 10 minutes. This will call your data source (i.e. the HTTP services) and save the data in the database.
2) When the front-end makes a call, check whether the results are present in the cache. If they are, return them from the cache so that (expensive) database calls are avoided. If not, retrieve them from the database, put them in the cache, and return the data.
You might want to design the cache and its items around your requirements. Depending on the caching library you use, you can look for features like auto-expiring cache items, reloading them automatically, etc.
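A minimal sketch of both steps in PHP, assuming MySQL for storage and APCu for the cache; the ticker_history table, the cache key, and the 'last' field of the ticker JSON are assumptions:

    // fetch_ticker.php: run from cron with */10 * * * * php /path/to/fetch_ticker.php
    $pdo = new PDO('mysql:host=localhost;dbname=charts', 'user', 'pass');

    $data = json_decode(file_get_contents('http://api.bitcoinaverage.com/ticker/USD'), true);

    // INSERT rather than UPDATE, so earlier samples are kept for the chart.
    $pdo->prepare('INSERT INTO ticker_history (source, price, fetched_at) VALUES (?, ?, NOW())')
        ->execute(['bitcoinaverage_usd', $data['last']]);

    // chart_data.php: front-end endpoint with a cache-aside lookup
    $points = apcu_fetch('chart:usd', $hit);
    if (! $hit) {
        $pdo = new PDO('mysql:host=localhost;dbname=charts', 'user', 'pass');
        $points = $pdo->query('SELECT price, fetched_at FROM ticker_history ORDER BY fetched_at')
                      ->fetchAll(PDO::FETCH_ASSOC);
        apcu_store('chart:usd', $points, 600); // auto-expire after 10 minutes
    }
    header('Content-Type: application/json');
    echo json_encode($points);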
I'm using PHP to work with an API that limits me to 1 request every 10 seconds. The most results it can return in one go is 10.
That means that to get 1,000 results I need to make 100 calls, which, done one after the other, would take about 17 minutes, so doing it 'on the fly' is not really an option.
If I need to get 1,000 rows from the API, what would be the best way to go about it?
Is there any way I could get the API data 'in the background', so that when I need the info it is already in my database? The API is only updated every 4 weeks, so it would only need to synchronise once in that period.
I've thought about using a cron job for this, but I'm not sure how it would work considering how long the script would have to run.
Using a cron job is a great way to store values from an API in a database. Use last-updated time fields on both sides, the API and the local database. If the date of the last record update in the API is greater than the date of the last record update in your local database, fetch the new data.
You can run the same process in the other direction, from the local database to the server, if you need to. This is a common way to keep data in sync while reducing API requests.
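A minimal sketch of such a sync script; the paged endpoint, its parameters, and the items table are all made up, and the sleep(10) keeps it inside the 1-request-per-10-seconds limit:

    // sync_api.php: run from cron (e.g. weekly, since the API only changes every 4 weeks)
    set_time_limit(0); // CLI scripts are unlimited by default anyway, but be explicit

    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    for ($page = 1; $page <= 100; $page++) { // 100 calls x 10 results = 1000 rows
        $json = file_get_contents("https://api.example.com/items?page={$page}");

        foreach (json_decode($json, true)['results'] as $row) {
            // REPLACE upserts by primary key, so re-running the sync is safe.
            $pdo->prepare('REPLACE INTO items (id, name, updated_at) VALUES (?, ?, ?)')
                ->execute([$row['id'], $row['name'], $row['updated_at']]);
        }

        sleep(10); // respect the rate limit before the next call
    }

Running it from the CLI via cron also sidesteps max_execution_time, which by default limits web requests but not command-line scripts.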
I am trying to create a PBBG (persistent browser-based game) like OGame, Space4k, and others.
My problem is with the always-updating resource collection and with the build times: a completion time is set when a building, ship, research item, etc. finishes, and the user's profile should be updated at that moment even if the user is offline. What and/or where should I learn how to build this? Should it be a constantly running script in the background?
Note that I wish to use only PHP, HTML, CSS, JavaScript, and MySQL, but I will learn something new if needed.
Cron jobs, or the Windows equivalent, seem to be the way, but that doesn't seem right or optimal to me.
Do you have to query your DB for many users' properties, like "show me all users who already have a ship of the galaxy class"?
If you do not need this, you could just check the build queue whenever someone requests the profile.
If that is not an option, you could add a finished_at column to your database and include WHERE finished_at >= SYSDATE() in your query to pick out builds still in progress. That way all resources, future and present, live in the same table.
Always keep in mind: what use is there in having "live" data if no one is requesting it?
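A minimal sketch of that lazy approach, settling the build queue only when a profile is requested; the table and column names are made up:

    // profile.php: apply finished builds lazily on each profile request
    $pdo = new PDO('mysql:host=localhost;dbname=game', 'user', 'pass');
    $userId = (int) ($_GET['user_id'] ?? 0);

    // Builds whose finished_at has passed are complete; credit them now.
    $done = $pdo->prepare(
        'SELECT id, type FROM build_queue
         WHERE user_id = ? AND finished_at <= NOW() AND applied = 0'
    );
    $done->execute([$userId]);

    foreach ($done->fetchAll(PDO::FETCH_ASSOC) as $build) {
        // ... add the ship/building/research to the user profile here ...
        $pdo->prepare('UPDATE build_queue SET applied = 1 WHERE id = ?')
            ->execute([$build['id']]);
    }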
My problem is with the always-updating resource collection and with the build times: a completion time is set when a building, ship, research item, etc. finishes, and the user's profile should be updated at that moment even if the user is offline.
I think the best way to do this is to use a message queue like beanstalkd (you will need to install/compile it yourself) for offline processing. Let's say it takes 30 seconds to build a ship. With a pheanstalk client (I like pheanstalk), you first put a message on the queue using:
$pheanstalk->put($data, $pri, $delay, $ttr);
See the protocol documentation for the meaning of all the arguments.
Pass $delay = 30: when a worker process calls reserve(), it will only receive the message after 30 seconds have passed.
$job = $pheanstalk->reserve();
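Putting the two calls together, a minimal producer/worker sketch; the connection details and the job payload are assumptions, and the API shown matches the classic Pheanstalk style used above:

    require 'vendor/autoload.php';

    use Pheanstalk\Pheanstalk;

    // producer.php: queue a "ship finished" job when the build starts
    $pheanstalk = new Pheanstalk('127.0.0.1');
    $data = json_encode(['user_id' => 42, 'ship' => 'galaxy']);
    $pheanstalk->put($data, 1024, 30, 60); // priority, 30s delay, 60s time-to-run

    // worker.php: long-running process that completes builds
    while (true) {
        $job = $pheanstalk->reserve();  // blocks until a delayed job becomes ready
        $payload = json_decode($job->getData(), true);
        // ... update the user's profile here, even if they are offline ...
        $pheanstalk->delete($job);      // remove the job once it has been processed
    }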
Streaming data to users in real time
You could also look into XMPP over BOSH to stream the new data to all users in real time.
http://www.ibm.com/developerworks/xml/tutorials/x-realtimeXMPPtut/index.html
http://abhinavsingh.com/blog/2010/08/php-code-setup-and-demo-of-jaxl-boshchat-application/
I have a website that uses MySQL, with a table named "People" in which each row represents, obviously, a person. When a user opens a page, I would like to show news related to that person (along with the information from the MySQL table). For that purpose, I decided to use the BING News Source API.
The problem with calling the BING API on every page load is that it increases my page load time (a round trip to BING's servers). Therefore, I have decided to pre-fetch all the news and save it in my table, under a column named "News".
Since my table contains 5,000+ people, running a PHP script that downloads the news for every person and updates the whole table at once results in Fatal error: Maximum execution time (I would not like to disable the timeout, since it is a good security measure).
What would be a good and efficient way to run such a script? I know I can run a cron job every 5 minutes that updates only a portion of the rows each time, but even then, what is the best way to save the current offset? Should I save the offset in MySQL, or as a server variable?
Use a cron job for a complex job like this.
You should increase the timeout if you plan to run the script as a cron job (you are pulling things from another site; the script is not public-facing).
Consider creating a master script (triggered by the cron job) that spawns multiple sub-scripts (with some concurrency control), so that you can pull data from the BING News Source in parallel (downloading the 5,000+ profiles in batches) rather than one by one sequentially (think batch processing); see the sketch below.
Update
Cron is a time-based job scheduler in Unix-like computer operating systems. The name cron comes from the word "chronos", Greek for "time". Cron enables users to schedule jobs (commands or shell scripts) to run periodically at certain times or dates. It is commonly used to automate system maintenance or administration, though its general-purpose nature means that it can be used for other purposes, such as connecting to the Internet and downloading email.
Cron - on Wiki
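A minimal sketch of the batched approach, keeping the offset in MySQL so it survives between runs; the settings table and the fetchBingNews() helper are made up:

    // update_news.php: run from cron every 5 minutes, processing one batch per run
    $pdo = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');

    // Stub: replace with a real BING News Source API request.
    function fetchBingNews(string $name): array
    {
        return [['title' => "Example headline about {$name}"]];
    }

    $batchSize = 100;
    $offset = (int) $pdo->query("SELECT value FROM settings WHERE name = 'news_offset'")
                        ->fetchColumn();
    $total  = (int) $pdo->query('SELECT COUNT(*) FROM People')->fetchColumn();

    // Both values are known integers, so they are interpolated directly here.
    $people = $pdo->query("SELECT id, name FROM People ORDER BY id LIMIT $batchSize OFFSET $offset");

    foreach ($people->fetchAll(PDO::FETCH_ASSOC) as $person) {
        $pdo->prepare('UPDATE People SET News = ? WHERE id = ?')
            ->execute([json_encode(fetchBingNews($person['name'])), $person['id']]);
    }

    // Advance the offset, wrapping back to 0 once the whole table has been covered.
    $next = ($offset + $batchSize >= $total) ? 0 : $offset + $batchSize;
    $pdo->prepare("UPDATE settings SET value = ? WHERE name = 'news_offset'")
        ->execute([$next]);

Each run touches only 100 rows, so it finishes well inside the execution limit, and the master-script variant above could launch several of these over different offset ranges.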
Why not load the news section of the page via AJAX? The rest of the page would then load quickly, and the delay from waiting for BING would only affect the news section, to which you could assign a loading placeholder.
Storing the news in the DB doesn't sound like a very efficient/practical solution; the ongoing management of those records alone could cause headaches in the future.
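That said, if the AJAX route is taken, the endpoint itself is tiny; a sketch with made-up names (news.php and the same fetchBingNews() stub standing in for the real API call):

    // news.php: fetched via AJAX so the main page renders immediately
    header('Content-Type: application/json');

    // Stub: replace with a real BING News Source API request.
    function fetchBingNews(string $name): array
    {
        return [['title' => "Example headline about {$name}"]];
    }

    echo json_encode(fetchBingNews($_GET['name'] ?? ''));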