I need to do a quick server-side test to check whether an IP address is inside or outside Australia. I don't want to go to the lengths of querying a remote server, nor of maintaining a global IP address allocation table in my database.
In the past I have been able to get country-by-country IP allocation ranges from apic.net, but that URL no longer seems to be valid. I would be surprised if it is no longer possible to do this. I would be much obliged to anyone who can point me to the right location to get this information.
There's a downloadable address-to-country database (in CSV format) available here. They also offer several programs and libraries to integrate that DB into your application here.
However, don't forget that these data are dynamic, and that you will have to update the database regularly:
If you use our IP to country Database in your applications you should get an updated copy from time to time. Depending on your application, probably once a month should be fine. We have observed up to 50 row changes from a single registry in a single day on busy days. Some days there are far fewer.
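For a quick server-side check, something like the following PHP sketch could scan the downloaded CSV. It assumes each row carries a numeric range start, a numeric range end and an ISO country code; adjust the column indexes to whatever the file you download actually contains.

    <?php
    // Minimal sketch: check whether an IP falls inside an Australian ("AU") range.
    // Assumed CSV columns: range_start (integer), range_end (integer), country_code.
    function isAustralianIp($ip, $csvPath)
    {
        $ipNum = sprintf('%u', ip2long($ip)); // unsigned integer form of the IP
        $handle = fopen($csvPath, 'r');
        if ($handle === false) {
            return false;
        }
        while (($row = fgetcsv($handle)) !== false) {
            list($start, $end, $country) = $row;
            if ($ipNum >= $start && $ipNum <= $end) {
                fclose($handle);
                return strtoupper($country) === 'AU';
            }
        }
        fclose($handle);
        return false;
    }

    var_dump(isAustralianIp('1.120.0.1', 'ip-to-country.csv'));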
You can find country-by-country IP allocation ranges on this site:
http://www.nirsoft.net/countryip/
But this site may not be available forever, so consider maintaining a local or remote database instead.
I found a downloadable country-wise listing here. As @Imane Fateh mentions, the risk of relying on such resources is that they vanish one day. That is why I had wanted to find a replacement for the one from apic.net that I mention in my original question (which was no insurance either, given that it too has vanished :-()
If you cannot implement your own IP geolocation, you can use this:
http://w3bdeveloper.com/api/geoIp/ip/24.24.24.24/key/hj2376bd2y8uhde27
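For what it's worth, calling such an endpoint from PHP would look roughly like this. The response structure (a JSON body with a countryCode field) is a guess here, so check the service's documentation for the real field names.

    <?php
    // Hedged sketch of querying an HTTP geo-IP API and checking the country.
    // 'countryCode' is an assumed field name -- verify against the real response.
    $ip  = '24.24.24.24';
    $url = 'http://w3bdeveloper.com/api/geoIp/ip/' . urlencode($ip) . '/key/YOUR_KEY';

    $response = @file_get_contents($url);
    if ($response !== false) {
        $data = json_decode($response, true);
        $isAustralian = isset($data['countryCode']) && $data['countryCode'] === 'AU';
        var_dump($isAustralian);
    }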
I've done some searches for this and can't seem to find anything on it. I'm looking for a starting point to create freemium model tools.
The languages that I'll be using are PHP, Ajax and MySQL.
Here's what I would like to get done.
Any random user can use the free tools on the site, but after X number of uses they are asked to register an account; otherwise, they can't use the tool for another 24 hours.
From what I've seen of other tools, it seems to be done through IP tracking and storing the addresses in a DB. But I can see this getting pretty messy after it hits millions of rows.
Can anyone with experience provide guidance on how I can start limiting the number of uses? I just have no idea where to start at this point.
If they don't register with an email first, then the only solution I can think of is IP tracking. It doesn't have to get messy if you set it up right: you just need a table with a column for the IP, a column for a counter, and a column for the date and time.
Then, each time you insert a row, you run another query to delete data older than 24 hours. Some people use the IP combined with device info.
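A minimal sketch of that table and check, assuming PDO/MySQL and made-up names:

    <?php
    // Hypothetical table:
    //   CREATE TABLE tool_usage (
    //       ip        VARCHAR(45) PRIMARY KEY,  -- 45 chars also covers IPv6
    //       counter   INT NOT NULL,
    //       last_used DATETIME NOT NULL
    //   );
    function canUseTool(PDO $db, $ip, $maxUses = 5)
    {
        // Purge entries older than 24 hours on every hit, as suggested above.
        $db->exec("DELETE FROM tool_usage WHERE last_used < NOW() - INTERVAL 24 HOUR");

        // Insert the IP or bump its counter.
        $stmt = $db->prepare(
            "INSERT INTO tool_usage (ip, counter, last_used) VALUES (?, 1, NOW())
             ON DUPLICATE KEY UPDATE counter = counter + 1, last_used = NOW()"
        );
        $stmt->execute(array($ip));

        $stmt = $db->prepare("SELECT counter FROM tool_usage WHERE ip = ?");
        $stmt->execute(array($ip));
        return (int) $stmt->fetchColumn() <= $maxUses;
    }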
I am working on a project that synchronizes online and offline features to cope with an unstable Internet connection. I have come up with a possible solution: create two similar databases, one online and one offline, and sync the two. My question is: is this a good method, or are there better options?
I have researched the subject online but haven't come across anything substantive. One useful link I found was on database replication, but I want the offline version to detect Internet presence and sync accordingly.
Can you please help me find solutions or clues to solve my problem?
I'd suggest you have online storage for syncing and a local database (browser IndexedDB, SQLite or something similar), log all your changes in your local database, and keep a record of what data was entered after the last sync.
When you have a connection, you sync all new data with the online storage at set intervals (say once every 5 minutes, or a constant stream if you have the bandwidth/CPU capacity).
When the user logs in from a "fresh" location, the online database pushes all data to the client, which fills the local database with the data and then resumes the normal syncing behaviour.
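A rough PHP sketch of the "log locally, push after the last sync" idea, assuming a local SQLite change_log table and a hypothetical remote HTTP endpoint:

    <?php
    // Keep unsynced changes in a local table and push them whenever the
    // connection is available. Table and endpoint names are made up.
    function pushPendingChanges(PDO $local, $remoteUrl)
    {
        $rows = $local->query(
            "SELECT id, payload FROM change_log WHERE synced = 0 ORDER BY id"
        )->fetchAll(PDO::FETCH_ASSOC);

        foreach ($rows as $row) {
            $context = stream_context_create(array('http' => array(
                'method'  => 'POST',
                'header'  => 'Content-Type: application/json',
                'content' => $row['payload'],
                'timeout' => 5,
            )));
            if (@file_get_contents($remoteUrl, false, $context) === false) {
                break; // still offline -- retry at the next interval
            }
            $mark = $local->prepare("UPDATE change_log SET synced = 1 WHERE id = ?");
            $mark->execute(array($row['id']));
        }
    }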
Plan A: Primary-Primary replication (formerly called Master-Master). You do need to be careful with PRIMARY KEYs and UNIQUE keys. While the "other" machine is offline, you could write conflicting values to a table. Later, when they try to sync up, replication will freeze, requiring manual intervention. (Not a pretty sight.)
Plan B: Write changes to some storage other than the db. This suffers the same drawbacks as Plan A, plus there is a bunch of coding on your part to implement it.
Plan C: Galera cluster with 3 nodes. When all 3 nodes are up, all can take writes. If one node goes down, or network problems make it seem offline to the other two, it will automatically become read-only. After things get fixed, the sync is done automatically.
Plan D: Only write to a reliable Primary; let the other be a readonly Replica. (But this violates your requirement about an "unstable Internet".)
None of these perfectly fits the requirements. Plan A seems to be the only one that has a chance. Let's look at that.
If you have any UNIQUE key in any table and you might insert new rows into it, the problem exists. Even something as innocuous as a 'normalization table' wherein you insert a name and get back an id for use in other tables has the problem. You might do that on both servers with the same name and get different ids. Now you have a mess that is virtually impossible to fix.
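To make that concrete, here is the usual get-or-create helper (made-up table with an AUTO_INCREMENT id and UNIQUE(name)). Nothing in it is wrong on a single server, which is exactly why the conflict is easy to miss:

    <?php
    // Run independently on two offline primaries, the same name can be given two
    // different auto-increment ids; when replication catches up, the duplicate
    // name collides on the UNIQUE key and replication stops.
    function getOrCreateId(PDO $db, $name)
    {
        $stmt = $db->prepare("SELECT id FROM names WHERE name = ?");
        $stmt->execute(array($name));
        $id = $stmt->fetchColumn();
        if ($id !== false) {
            return (int) $id;
        }
        $db->prepare("INSERT INTO names (name) VALUES (?)")->execute(array($name));
        return (int) $db->lastInsertId();
    }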
Not sure if it's outside the scope of the project, but you can try these:
https://pouchdb.com/
https://couchdb.apache.org/
" PouchDB is an open-source JavaScript database inspired by Apache CouchDB that is designed to run well within the browser.
PouchDB was created to help web developers build applications that work as well offline as they do online.
It enables applications to store data locally while offline, then synchronize it with CouchDB and compatible servers when the application is back online, keeping the user's data in sync no matter where they next login. "
I am currently thinking up a system to allow online voting for my old high school (a mock award ceremony, really). Due to a restrictive school board, I can guarantee that MySQL will not be an option to store votes. I am also under the assumption that, should votes be stored in local files, data will be overwritten when the same file is written to multiple times at once (which is a real possibility).
Does anyone have any suggestions as to how I might go about this? Preferably a PHP-based solution, given the school board's restrictions. Please note the data will only need to be accessible for a few hours on a continuously running web server, so if the data is RAM-like (for lack of a better term) that would be fine.
While I am tempted to reject the premise of the question like some commenters have, here's an answer (I'm shamelessly trying to earn 200 reputation to try to help get a new site launched):
Write a recordVote function that stores each vote in its own file in a directory, using a unique id in the file name (PHP doesn't have a built-in way guaranteed to yield truly unique GUIDs on all platforms, so use something like https://gist.github.com/dahnielson/508447).
When the polls close, run a tallyVote routine to compile the count of votes by reading all files in the directory.
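A bare-bones sketch of those two routines (the vote payload, directory name and unique-id scheme are placeholders; swap in a proper GUID generator if you prefer):

    <?php
    function recordVote($dir, $candidate)
    {
        // One file per vote; the unique suffix keeps concurrent requests from
        // ever writing to the same file.
        $filename = $dir . '/' . uniqid(mt_rand(), true) . '.vote';
        file_put_contents($filename, $candidate, LOCK_EX);
    }

    function tallyVotes($dir)
    {
        $counts = array();
        foreach (glob($dir . '/*.vote') as $file) {
            $candidate = trim(file_get_contents($file));
            $counts[$candidate] = isset($counts[$candidate]) ? $counts[$candidate] + 1 : 1;
        }
        arsort($counts);
        return $counts;
    }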
I want to put together a PHP script to resolve city names (nothing else is needed) with good accuracy, just for a single country (Iran). As I have to query the DB multiple times, it's better to go with a downloadable local version.
I have read most of the posts on Stack Overflow, and so far I have tested these:
GeoIP City from MaxMind sounds good, but it is not free.
GeoIP from MaxMind has a low level of accuracy (about 50-60%).
ip2country.net has an IP-2-City database, but it is not free and does not resolve city names for Iran.
I also tried the DB#.Lite from ipinfodb.com, which has an API here, without any success. The problem is that it does not detect many city names.
I also tried the hostip.info API, but it seems to be too slow.
There is a free PHP class with a local DB, which resolves only the country name.
I don't know if there is a chance of using Piwik with this GeoIP plugin. Ideas would be appreciated if someone knows about it.
ipinfo.io is another service which does not resolve city names accurately.
I don't know if there is a way to use Google Analytics to resolve city names; I think Google would be better than any other service for countries like Iran.
Any good idea would be really appreciated.
This is a tough one and hard to do reliably. I have given it a go in the past, and it went something like this:
Obtain a database of IP addresses, plus cities and countries (http://lite.ip2location.com/database-ip-country-region-city OR http://ipinfodb.com/ip_database.php)
Get the IP address and query it against those tables to find the city and country
Finally, check whether it's in Iran using the country column
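A minimal PHP sketch of steps 2 and 3, assuming an IP2Location-style table where the ranges are stored as integers (ip_from / ip_to); adjust the table and column names to whichever database you end up downloading:

    <?php
    // $db is assumed to be an open PDO connection to the local copy.
    function lookupCity(PDO $db, $ip)
    {
        $ipNum = sprintf('%u', ip2long($ip)); // dotted address -> IP number
        $stmt  = $db->prepare(
            "SELECT country_code, city_name
               FROM ip2location_db3
              WHERE ip_from <= ? AND ip_to >= ?
              LIMIT 1"
        );
        $stmt->execute(array($ipNum, $ipNum));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row ? $row : null;
    }

    $row = lookupCity($db, $_SERVER['REMOTE_ADDR']);
    if ($row && $row['country_code'] === 'IR') {
        echo $row['city_name'];
    }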
There are paid-for services that can do this really quickly for you. It might take you ages to get something working that is still unreliable, because you simply do not have the data. I would seriously consider http://www.maxmind.com/en/city_per - unless of course this is a completely non-commercial project and $ is a no-no.
If you can get the lat and long from an IP table, even without the city data, then you may want to use something like this to check for the nearest city, if JavaScript is an option - Finding nearest listed (array?) city from known location.
What about the browser's Share Location feature?
If a browser-based solution works for your use case, you might want to look at MaxMind's GeoIP2 JavaScript API. It first attempts to locate the user using HTML5 geolocation and if that fails or is inaccurate, it reverts to MaxMind's GeoIP2 City data (not GeoLite). MaxMind provides a free attribution version.
Sometimes we need to use a local Geo-IP database instead of web services for particular purposes. This is my experience:
I downloaded a database from https://db-ip.com. I searched a lot and finally found this one to be more reliable, but there are still two problems:
1. The "IP address to city" database is too huge to upload to a MySQL database on shared hosting, as it exceeds the timeout limit.
2. The database is based on IP addresses, whereas the http://lite.ip2location.com database is based on IP numbers.
So I developed a simple .NET app to solve those problems.
The solution can be downloaded from here: https://github.com/atoosi/IP2Location-Database-Luncher
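As an aside, if all that's needed is the address/number conversion from point 2 and the stack is PHP, no extra tool is required:

    <?php
    // Convert between a dotted IPv4 address and the integer "IP number" form
    // used by databases such as lite.ip2location.com.
    $ipNumber  = sprintf('%u', ip2long('8.8.8.8')); // "134744072"
    $ipAddress = long2ip((int) $ipNumber);          // "8.8.8.8"
    echo $ipNumber . ' ' . $ipAddress;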
I'm developing a website that is sensitive to page visits. For instance it has sections that will show the users which parts of the website (which items) have been visited the most. To implement this features, two strategies come to my mind:
Create a page hit counter, sort the pages by the number of visits and pick the highest ones.
Create a Google Analytics account and use its info.
If the first strategy is chosen, I will need a very fast and accurate hit counter with the ability to distinguish unique IPs (or users). I believe that using MySQL wouldn't be a good choice, since a lot of page visits means a lot of DB locks and performance problems. I think a fast logging class would be a good option.
The second option seems very interesting when all the problems of the first one emerge, but I don't know if there is a way (like an API) for Google Analytics to give me access to the information I want. And if there is, is it fast enough?
Which approach (or even an alternative approach) do you suggest I take? Which one is faster? Performance is my top priority. Thanks.
UPDATE:
Thank you. It's interesting to see the different answers. These answers reminded me of an important factor. My website updates the "most visited" items every 8 minutes, so I don't need the data in real time, but I need it to be accurate enough every 8 minutes or so. What I had in mind was this:
Log every page visit to a simple text log file
Send a cookie to the user to separate unique users
Every 8 minutes, load the log file, collect the info and update the MySQL tables.
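A rough sketch of that plan (cookie name and log format are placeholders):

    <?php
    // Tag each visitor with a cookie and append one line per hit to a text log.
    $logFile = __DIR__ . '/hits.log';

    if (empty($_COOKIE['visitor_id'])) {
        $visitorId = uniqid(mt_rand(), true);
        setcookie('visitor_id', $visitorId, time() + 86400 * 365, '/');
    } else {
        $visitorId = $_COOKIE['visitor_id'];
    }

    $line = implode("\t", array(time(), $visitorId, $_SERVER['REQUEST_URI'])) . PHP_EOL;
    // FILE_APPEND + LOCK_EX keeps concurrent hits from clobbering each other.
    file_put_contents($logFile, $line, FILE_APPEND | LOCK_EX);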
That said, I wouldn't want to reinvent the wheel. If a 3rd party service can meet my requirements, I would be happy to use it.
Given you are planning to use the page hit data to determine what data to display on your site, I'd suggest logging the page hit info yourself. You don't want to be reliant upon some 3rd party service that you'd have to interrogate in order to create your page. This is especially true if you are loading that data real time as you'd have to interrogate that service for every incoming request to your site.
I'd be inclined to save the data yourself in a database. If you're really concerned about the performance of the inserts, then you could investigate intercepting requests (I'm not sure how you go about this in PHP, but I'm assuming it's possible) and then passing the request data off to a separate thread to store the request info. By having a separate thread handle the logging, you won't interrupt your response to the end user.
Also, given you are planning using the data collected to "... show the users which parts of the website (which items) have been visited the most", then you'll need to think about accessing this data to build your dynamic page. Maybe it'd be good to store a consolidated count for each resource. For example, rather than having 30000 rows showing that index.php was requested, maybe have one row showing index.php was requested 30000 times. This would certainly be quicker to reference than having to perform queries on what could become quite a large table.
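A hedged sketch of that consolidated count, assuming an open PDO connection in $pdo and a made-up page_hits table (path as the primary key, hits as an unsigned counter):

    <?php
    // Bump a single per-resource counter on each hit.
    $stmt = $pdo->prepare(
        "INSERT INTO page_hits (path, hits) VALUES (?, 1)
         ON DUPLICATE KEY UPDATE hits = hits + 1"
    );
    $stmt->execute(array($_SERVER['REQUEST_URI']));

    // "Most visited" then becomes a cheap query instead of a scan of raw hits.
    $top = $pdo->query("SELECT path, hits FROM page_hits ORDER BY hits DESC LIMIT 10")
               ->fetchAll(PDO::FETCH_ASSOC);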
Google Analytics has a latency to it and it samples some of the data returned to the API so that's out.
You could try the API from Clicky. Bear in mind that:
Free accounts are limited to the last 30 days of history, and 100 results per request.
There are many examples of hit counters out there, but it sounds like you didn't find one that met your needs.
I'm assuming you don't need real-time data. If that's the case, I'd probably just read the data out of the web server log files.
Your web server can distinguish IP addresses. There's no fully reliable way to distinguish users. I live in a university town; half the dormitory students have the same university IP address. I think Google Analytics relies on cookies to identify users, but shared computers makes that somewhat less than 100% reliable. (But that might not be a big deal.)
"Visited the most" is also a little fuzzy. The easy way out is to count every hit on a particular page as a visit. But a "visit" of 300 milliseconds is of questionable worth. (Probably realized they clicked the wrong link, and hit the "back" button before the page rendered.)
Unless there are requirements I don't know about, I'd probably start by using awk to extract timestamp, ip address, and page name into a CSV file, then load the CSV file into a database.