I have a dilemma that I need to figure out.
So I am building a website where people can watch a competitive game (such as Counter-Strike: Global Offensive), either through a Twitch.tv stream or through the in-game streaming service the game may offer (in this example, CS:GO TV). While watching, members can place "bets" on which team will win, using some form of credits with no real value. Of course, the issue here is that the site will need to be able to pull the score from the game and update in real time. So, sticking with the CS:GO example: is there a portion of the Steamworks API that would allow real-time pulling of a game's score, through some kind of PHP or JavaScript method?
I'm sorry to tell you that you can't, for now.
The API discussion entry on CS:GO competitive match information says:
It would be interesting to be able to find out competitive match information -- exactly like what DOTA 2 has. It could contain all the players in the map, with their steamids and competitive ranks, the score at half time/full time. There are probably a few more bits of info that could also be included. Pigophone2 16:54, 14 September 2013 (PDT)
To answer your question, there is no Steam developed API that does this.
However, many websites still do exactly what you are looking for.
My guess is that they use a regularly updated script which parses websites like ESEA and ESL and pulls data about those matches. After all, they are the ones who host almost all the big games that people care about.
You'll need to keep up-to-date with private leagues though, as they don't typically publish live stats in an easily parse-able format. GOSU Gamers can help you track any new players that come to the big-league table.
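If you go the scraping route, the core of such a script is small. Here is a minimal sketch using only Python's standard library; the class names `team` and `score` are hypothetical, and a real scraper would have to match whatever markup the league site actually serves:

```python
from html.parser import HTMLParser


class ScoreboardParser(HTMLParser):
    """Collects text from elements whose class is 'team' or 'score'.

    The class names are hypothetical -- adapt them to the actual
    markup of the league site (ESEA, ESL, ...) you are parsing.
    """

    def __init__(self):
        super().__init__()
        self._capture = None
        self.teams = []
        self.scores = []

    def handle_starttag(self, tag, attrs):
        cls = (dict(attrs).get("class") or "").split()
        if "team" in cls:
            self._capture = "team"
        elif "score" in cls:
            self._capture = "score"

    def handle_data(self, data):
        if self._capture == "team":
            self.teams.append(data.strip())
        elif self._capture == "score":
            self.scores.append(int(data.strip()))
        self._capture = None


def parse_scoreboard(html):
    """Return a {team_name: score} mapping parsed from scoreboard HTML."""
    p = ScoreboardParser()
    p.feed(html)
    return dict(zip(p.teams, p.scores))
```

Run it from cron every minute or two and diff the result against what you last stored.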
My application will allow users to like or dislike a product and leave a short feedback. I have to make a functionality which will show graph and produce report based on different time frame, probably it will be yearly, monthly, weekly and daily basis.
I have to show, via a chart, how many users liked or disliked the product over a particular time period, and generate a report. So my application should be able to produce a daily graph for August 2018, or a monthly graph for the year 2018, for a particular product. A daily graph should reveal how many users liked or disliked the product each day; similarly for the weekly, monthly, or yearly time frames.
I am not sure what the database structure should be for this type of application. Here is what I have thought of so far.
products: id, name, descp...etc // products table
users: id, name, email ...etc // users table
user_reactions: id, user_id(foreign key), product_id(foreign key), action(liked or disliked, tinyint), feedback // user_reactions table
data: id, product_id(foreign key), date(Y-m-d), total_like, total_dislike. // data table, will be used to make graph and report
What I am thinking is that I will run a cron job at 23:59:59 every day to count the likes and dislikes of each product, add the results to the last table (the data table mentioned above), and then use that table to build the graphs and reports. I am not sure if this database structure is correct or whether it has some unseen problem (maybe in the future?).
Note: My Application will be in PHP and MySQL
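The nightly rollup I have in mind would look something like this. This is only a sketch in Python, with sqlite3 standing in for MySQL so it is self-contained; note that it assumes a `reacted_on` date column on `user_reactions`, which my schema above would need for a per-day count:

```python
import sqlite3


def rollup_day(conn, day):
    """Aggregate user_reactions into one row per product in `data`.

    Sketch only: uses sqlite3 in place of MySQL, but the SQL itself is
    portable. Assumes a `reacted_on` date column (Y-m-d) that the schema
    in the question would need to add.
    """
    conn.execute(
        """
        INSERT INTO data (product_id, date, total_like, total_dislike)
        SELECT product_id, ?,
               SUM(action = 1),       -- action: 1 = liked
               SUM(action = 0)        -- action: 0 = disliked
        FROM user_reactions
        WHERE reacted_on = ?
        GROUP BY product_id
        """,
        (day, day),
    )
    conn.commit()
```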
Well, there is no single right answer to your question; any answer will be opinion-based. But still, hear me out, my friend, because I was in your position once.
There is a quote by the famous professor Donald Knuth:
Premature optimization is the root of all evil
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
The idea is that you have to start building. As your application progresses, you will run into trouble: you will face problems with your database, your system might not scale, or it can't handle a million requests. But until you hit a problem, you don't have to worry about it.
I am not saying that you should go and build a system blindly with an infinite loop, or create a table join which can cause deadlocks. I hope you get my point.
Build a system with your current knowledge and understanding, because there is no single straight path to a solution. Build a feature -> hit an issue -> tweak your app -> rinse and repeat. One day your own experience will show you the right path.
From your description I can't tell exactly how it will turn out, but I am sure it will suffice for your initial days. As you progress, you might find it hard to add new features or additional constraints, but that's a problem for another day; wait for it and ask another question.
I hope I have answered your question.
I was thinking about an idea of auto-generated answers; well, the answer would actually be a URL instead of an actual answer, but that's not the point.
The idea is this:
On our app we've got a reporting module which basically shows page views, clicks, conversions, and details about visitors like where they're from; pretty much similar to Google Analytics, but much more simplified.
And now I was thinking: instead of making users select things like countries and traffic sources from dropdown menus (those features would be available as well), it would be pretty cool to let them type in questions that resolve to a link to the relevant part of the report. An example:
How many conversions did I have from Japan on variant 3? (one page can have many variants)
would result in:
/campaign/report/filter/campaign/(current campaign id they're on)/country/Japan/variant/3/
It doesn't seem too hard to do myself; it's just that it would take quite a while to make it accurate enough.
I've tried Googling but had no luck finding an existing script, so maybe you know of something similar to my idea that's open source and reliable/flexible enough to suit my needs.
Thanks!
You are talking about natural language processing - an artificial intelligence topic. This can never be perfect, and eventually boils down to the system only responding to a finite number of permutations of one question.
That said, if that is fine with you, then you simply need to identify "tokens". For example:
how many - evaluates to a count
conversions - evaluates to all "conversions"
from - applies a filter...
Japan - ...using Japan
etc.
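A minimal sketch of that token approach in Python; the token table and URL segments are made up to match the example question above and would need tuning against real traffic:

```python
import re

# Hypothetical token table: phrase pattern -> URL path segment template.
# \1 in a template is filled from the pattern's capture group.
TOKEN_RULES = [
    (r"\bhow many\b", "count"),
    (r"\bconversions?\b", "metric/conversions"),
    (r"\bfrom (\w+)\b", r"country/\1"),
    (r"\bvariant (\d+)\b", r"variant/\1"),
]


def question_to_path(question):
    """Translate a free-text question into report-filter path segments."""
    parts = []
    for pattern, template in TOKEN_RULES:
        m = re.search(pattern, question, re.IGNORECASE)
        if m:
            parts.append(m.expand(template) if m.groups() else template)
    return "/" + "/".join(parts) + "/"
```

Accuracy then becomes a matter of growing the rule table, which is exactly the "finite permutations" limitation mentioned above.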
I have a unique problem: I need to pull specific attributes for every game being played, every 5 minutes. The two main issues I have are:
Parsing data from a website that displays it interactively, e.g. MLB.com, ESPN, CBS Sports.
Finding a source that would perhaps show the box scores that are updated live and in a text format.
I have done significant Googling as well as looking at possible solutions for scraping data off of MLB and CBS Sports. I haven't had much luck; it's a bit difficult right now because I don't have any fresh data to play with, but I've been looking for possible solutions and haven't come to any resolution.
To my knowledge there isn't an open database I can query that contains live score updates; otherwise I could piggyback off of that or build a similar system.
Check out this forum question on another site. It looks like there are a few sources that will let you get CSVs of their data. I'm not sure how much of it could be automated.
http://ask.metafilter.com/120399/MLB-API
Another is http://www.baseball-reference.com/. I'm not sure if they do box scores, but they have stats on all the players, games, etc., so they might have something you can use as well.
Finally you could check out http://www.strat-o-matic.com/ they might have something or be willing to create an API for you.
If you look at Yahoo, you'll notice they get their stats from STATS LLC. I have no idea what it costs, but you should check out their real-time data delivery service.
Scrape the MLB Gameday server; it is updated in real time during games. If you want the box score, scrape boxscore.xml (for example).
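A sketch of such a scrape, using only the standard library. The `linescore` element and its `home_team_runs`/`away_team_runs` attributes follow the old Gameday boxscore.xml layout as I remember it; verify the names against a live file before relying on this:

```python
import xml.etree.ElementTree as ET


def parse_boxscore(xml_text):
    """Pull the current run totals out of a boxscore document.

    Element and attribute names are assumptions based on the old
    MLB Gameday boxscore.xml layout -- check them against real data.
    """
    root = ET.fromstring(xml_text)
    line = root.find(".//linescore")
    return {
        "home": int(line.get("home_team_runs")),
        "away": int(line.get("away_team_runs")),
    }
```

Fetch the file on your 5-minute schedule and feed the body to `parse_boxscore`.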
I've implemented a voting system for videos online, wherein visitors can only cast a vote once in any given day. I use a combination of their email address and a timestamp to ensure that each vote is unique for that day.
As you might guess, this led to people gaming the system by registering throwaway email addresses at mailinator.com and the like, so I'm wondering if anyone has tried other voting algorithms that allow multiple votes by the same person. In addition, this setup means that if video #1 has more people associated with it than video #2, video #1 is already at an unfair advantage.
I'm thinking about a ranked system, but I'm not totally sure how that could prevent anyone from gaming the system with fake email addresses. The problem I'm trying to solve is like this:
Given 3 videos, A, B and X. A has 5 people in it, B has 2, and X has 4.
Assuming that X is the best video of the three, and that people can vote every day, is there a voting system that will help "X" rise to the top?
Like I said, my proposed ranked system would posit that if the number of #2 votes outnumbers the #1 votes, it's safe to assume that it should be the winner, but that seems incomplete.
Has anyone tackled anything like this before? Keep in mind, these are pretty low volume results (we average about 500 votes/7 days), so 2 people can really make a difference.
This is on a LAMP (PHP) stack in a shared hosting environment, if it helps.
Also, if you're wondering why we're allowing multiple votes by the same person, it's because the higher ups realize this helps drive traffic to the site, and they really enjoy seeing graphs go up (despite the fact the subsequent hits are pretty meaningless).
Thanks in advance, and if you need any other information please let me know.
You're actually asking about two separate things:
First, how can you prevent people gaming the system? This is pretty intractable. You can raise the bar for placing a vote, by requiring registration, a minimum reputation like SO, or other restrictions, but ultimately all you can hope to do is reduce cheating, not eliminate it. Consider that people successfully register multiple times for physical political elections, then evaluate how likely it is that you can eliminate all cheating on your site.
Second, how do you give a fair quality ranking to different items that may have different popularity and have been around for different times? One very good solution is described here by Randall Munroe. That article links to the actual algorithm, which is fairly straightforward to implement.
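If memory serves, the algorithm that article points to is ranking by the lower bound of the Wilson score confidence interval, which keeps items with only a handful of votes from outranking well-tested ones. A sketch in Python:

```python
import math


def wilson_lower_bound(upvotes, total, z=1.96):
    """Lower bound of the Wilson score confidence interval for a
    Bernoulli parameter (z=1.96 gives roughly a 95% interval).

    Ranking by this value favors items whose approval rate is both
    high and well supported by the number of votes cast.
    """
    if total == 0:
        return 0.0
    phat = upvotes / total
    denom = 1 + z * z / total
    centre = phat + z * z / (2 * total)
    spread = z * math.sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - spread) / denom
```

With 500 votes a week this is cheap enough to recompute on every page load; no nightly job needed.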
There is no solution to your problem without a login system. People will keep defeating your system unless you provide a real authentication system that takes several steps to create an account. OpenID is great for this, by the way.
Do not use heavy cookie-based tricks (and especially do not use Evercookie); that is an offense to your users' privacy. I would never want a zombie cookie on my computer.
If they keep gaming you, there is nothing you can do, except manually flagging garbage accounts and deleting the corresponding votes.
Or you can do a reputation based system, with a minimal rep needed to vote (like StackOverflow).
Look at OpenID if you want a fast secure working solution.
There is this Q&A platform on the net -- don't know if you ever heard of that -- it's called stackoverflow.com ;-)
Maybe you can adopt the rating system of this site? I find it quite clever to allow only users with a given reputation to manipulate the system in certain ways. You can select users by the age of their account (e.g. votes only count from 2 weeks after registration) or by some kind of reputation system.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I am planning on creating a small website for my personal book collection. To automate the process a little bit, I would like to create the following functionality:
The website will ask me for the ISBN number of the book and will then automatically fetch the title and add it to my database.
Although I am mainly interested in doing this in PHP, I also have some Java implementation ideas. I believe it would also help if the answer were as language-agnostic as possible.
This is the LibraryThing founder. We have nothing to offer here, so I hope my comments will not seem self-serving.
First, the comment about Amazon, ASINs, and ISBNs is wrong in a number of ways. In almost every circumstance where a book has an ISBN, the ASIN and the ISBN are the same. And it is not the case that ISBNs are now 13 digits; rather, ISBNs can be either 10 or 13 digits. Ten-digit ISBNs can be expressed as 13-digit ones starting with 978, which means every ISBN currently in existence has both a 10- and a 13-digit form. There are all sorts of libraries available for converting between ISBN-10 and ISBN-13. Basically, you add 978 to the front and recalculate the check digit at the end.
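For what it's worth, the 10-to-13 conversion just described fits in a few lines. This Python sketch assumes well-formed input (exactly ten characters once hyphens are removed):

```python
def isbn10_to_isbn13(isbn10):
    """Convert a 10-digit ISBN to its 13-digit form.

    Prefix '978', drop the old check digit, and recompute the EAN-13
    check digit (alternating weights 1 and 3, modulo 10).
    """
    core = "978" + isbn10.replace("-", "")[:9]
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(core))
    check = (10 - total % 10) % 10
    return core + str(check)
```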
ISBN-13 was invented because publishers were running out of ISBNs. In the near future, when 979-based ISBN-13s start being used, they will not have an ISBN-10 equivalent. To my knowledge, there are no published books with 979-based ISBNs yet, but they are coming soon. Anyway, the long and short of it is that Amazon uses the ISBN-10 form for all 978-based ISBNs. In any case, whether or not Amazon uses ten- or thirteen-digit ASINs, you can search Amazon by either just fine.
Personally, I wouldn't put ISBN DB at the top of your list. ISBN DB mines from a number of sources, but it's not as comprehensive as Amazon or Google. Rather, I'd look into Amazon—including the various international Amazons—and then the new Google Book Data API and, after that, the OpenLibrary API. For non-English books, there are other options, like Ozone for Russian books.
If you care about the highest-quality data, or if you have any books published before about 1970, you will want to look into data from libraries, available by Z39.50 protocol and usually in MARC format, or, with a few libraries in Dublin Core, using the SRU/SRW protocol. MARC format is, to a modern programmer, pretty strange stuff. But, once you get it, it's also better data and includes useful fields like the LCCN, DDC, LCC, and LCSH.
LibraryThing runs off a homemade Python library that queries some 680 libraries and converts the many flavors of MARC into Amazon-compatible XML, with extras. We are currently reluctant to release the code, but we may release it as a service soon.
Google has its own API for Google Books that lets you query the Google Books database easily. The protocol is JSON-based, and you can view the technical information about it here.
You essentially just have to request the following URL:
https://www.googleapis.com/books/v1/volumes?q=isbn:YOUR_ISBN_HERE
This will return the information about the book in JSON format.
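For example, a small Python sketch of the request and of pulling the title out of the reply. The response shape (`totalItems`, `items`, `volumeInfo`) is what the API documents; `lookup_title` needs network access:

```python
import json
import urllib.request

GOOGLE_BOOKS_URL = "https://www.googleapis.com/books/v1/volumes?q=isbn:{}"


def title_from_response(body):
    """Extract the first volume's title from a Google Books JSON reply,
    or None if the ISBN matched nothing."""
    data = json.loads(body)
    if data.get("totalItems", 0) == 0:
        return None
    return data["items"][0]["volumeInfo"]["title"]


def lookup_title(isbn):
    """Query the Google Books API for an ISBN (requires network access)."""
    with urllib.request.urlopen(GOOGLE_BOOKS_URL.format(isbn)) as resp:
        return title_from_response(resp.read().decode("utf-8"))
```

The same two steps (GET the URL, read `items[0].volumeInfo.title`) translate directly to PHP with `file_get_contents` and `json_decode`.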
Check out the ISBN DB API. It's a simple REST-based web service. I haven't tried it myself, but a friend has had success with it.
It'll give you the book title, author information, and, depending on the book, a number of other details you can use.
Try https://gumroad.com/l/RKxO
I purchased this database about 3 weeks ago for a book citation app I'm making. I haven't had any quality problems, and virtually every book I scanned was found. The only problem is that they provide the file in CSV, and I had to convert 20 million lines, which took almost an hour! Also, the monthly updates are not deltas; the entire database is sent each time, which works for me but might be extra work for others.
I haven't tried it, but take a look at isbndb
API Description: Introduction
ISBNdb.com's remote access application programming interface (API) is designed to allow other websites and standalone applications to use the vast collection of data collected by ISBNdb.com since 2003. As of this writing, in July 2005, the data includes nearly 1,800,000 books; almost 3,000,000 library records; close to a million subjects; hundreds of thousands of author and publisher records parsed out of library data; and more than 10,000,000 records of actual and historic prices.
Some ideas of how the API can be used include:
- Cataloguing home book collections
- Building and verifying bookstores' inventories
- Empowering forums and online communities with more useful book references
- Automated cross-merchant price lookups over messaging devices or phones
Using the API you can look up information by keywords, by ISBN, by authors or publishers, etc. In most situations the API is fast enough to be used in interactive applications.
The data is heavily cross-linked -- starting at a book you can retrieve information about its authors, then other books of these authors, then their publishers, etc.
The API is primarily intended for use by programmers. The interface strives to be platform and programming language independent by employing open standard protocols and message formats.
Although the other answers are correct, this one explains the process in a little more detail, using the Google Books API.
https://giribhatnagar.wordpress.com/2015/07/12/search-for-books-by-their-isbn/
All you need to do is:
1. Create an appropriate HTTP request
2. Send it and receive the JSON object containing details about the book
3. Extract the title from the received information
The response you get is in JSON. The code given on the above site is for Node.js, but I'm sure it won't be difficult to reproduce in PHP (or any other language, for that matter).
To obtain data for a given ISBN you need to interact with some online service like isbndb.
One of the best sources for bibliographic information is the Amazon web service. It provides you with all the bibliographic info plus the book cover.
You might want to look into LibraryThing, it has an API that would do what you want and they handle things like mapping multiple ISBNs for different editions of a single "work".
As an alternative to isbndb (which seems like the perfect answer), I had the impression that you could pass an ISBN into an Amazon product URL to go straight to the Amazon page for the book. While this doesn't programmatically return the book title, it might be a useful extra feature in case you want to link to Amazon user reviews from your database.
However, this link appears to show that I was wrong. What Amazon actually uses is the ASIN number, and while this used to be the same as the 10-digit ISBN, those are no longer the only kind: ISBNs now have 13 digits (though there is a straight conversion from the old 10-digit type).
But more usefully, the same link does talk about the Amazon API which can convert from ISBN to ASIN and is likely to also let you look up titles and other information. It is primarily aimed at Amazon affiliates, but no doubt it could do the job if for some reason isbndb does not.
Edit: Tim Spalding above points out a few practical facts about ISBNs - I was slightly too pessimistic in assuming that ASINs would not correspond any more.
You may also try this database: http://www.usabledatabases.com/database/books-isbn-covers/
It's got more books/ISBNs than most web services you can currently find on the web, but it's probably overkill for your small site.