MySQL distance to line - PHP

I am trying to create a system that detects whether an athlete is on the route they are supposed to follow. For that I have a component where I can draw the route they should take. That route is stored in MySQL, using LINE.
The next step, once I have a coordinate for the user, is finding the distance between the user and the line. I use this query for that:
SELECT *, ST_Distance(POINT(50, 2), map_points) FROM tbl_route_new
This returns good information (I think), but this is the value I get back:
3.7770580579682638
Can someone tell me how I can find out what the real distance is? I think this value is in degrees. If I try some converters I found on the Internet I get a value, but unfortunately that value is wrong.
If this isn't possible, I will use PHP to do the calculations, but I think MySQL is faster than a loop over all the points in PHP.
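For reference: when coordinates are stored in an unprojected lat/lng system, ST_Distance returns its result in coordinate units, i.e. degrees here, and MySQL 5.7+ also provides ST_Distance_Sphere, which returns metres directly between points. A rough manual conversion for a small degree-valued distance can be sketched as follows (a sketch only: the 3.777... figure and the latitude 50 come from the question above, and the conversion assumes the distance runs either due north-south or due east-west, so the real value lies somewhere between the two bounds):

```python
import math

# Approx. metres per degree of latitude on Earth.
DEG_TO_M = 111_320

def degrees_to_metres_east_west(deg_distance, latitude_deg):
    """Approximate a small east-west degree-distance as metres at a latitude.

    One degree of longitude shrinks by cos(latitude); a pure north-south
    distance would simply be deg_distance * DEG_TO_M.
    """
    return deg_distance * DEG_TO_M * math.cos(math.radians(latitude_deg))

# The value from the question, interpreted both ways:
north_south = 3.7770580579682638 * DEG_TO_M                      # upper bound
east_west = degrees_to_metres_east_west(3.7770580579682638, 50)  # lower bound
```

So the 3.777-degree result corresponds to very roughly 270-420 km depending on direction, which is why generic unit converters give inconsistent answers.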

Related

Matrix Distance Genetic Algorithm on Android

I am making a routing application on Android where the user can input the number of hours they want to travel and my application outputs possible routes they can take.
I am using a genetic algorithm (GA) to produce the route, and I use PHP to execute my GA.
Here is the problem: for the routing to be effective, I need to know the distance between each pair of cities, to verify that a route is possible and that its distance is minimized. How do I store the distance between each city so that execution is faster? I have tried getting the distances directly
from the Google Maps API, but that takes too long to execute.
I was thinking of storing the distances in a JSON file; is that possible? Or is there another effective way?
Note that the destinations are dynamic. Users can add a new destination, so whenever there is a new destination the distance matrix needs to be updated.
Please help me :) Thank you.
You know the initial position of the user and want to know the distances to different destinations. I suggest you use a deterministic single-source shortest path algorithm such as Dijkstra's instead of an evolutionary algorithm. An implementation based on a min-priority queue backed by a Fibonacci heap runs in O(E + V log V), where E is the number of edges and V is the number of vertices. It runs much faster than a genetic algorithm and finds the best answer rather than an approximate one. It also has the property of finding the nearest destinations first, which suits your use case.
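A minimal sketch of the suggested approach (in Python rather than PHP for brevity; the city names and travel times below are made up, and this uses the standard-library binary heap, which is O((E + V) log V) rather than the Fibonacci-heap bound, but is usually fast enough in practice):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths.

    graph: {node: [(neighbour, weight), ...]} with non-negative weights.
    Returns {node: shortest distance from source}.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already found a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical city graph; edge weights are travel times in minutes.
cities = {
    "A": [("B", 10), ("C", 30)],
    "B": [("C", 15), ("D", 50)],
    "C": [("D", 20)],
    "D": [],
}
distances = dijkstra(cities, "A")
```

Because the priority queue always pops the closest unsettled node first, destinations are settled in order of increasing distance, which is exactly the "nearest destinations first" property mentioned above.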

PHP/Ajax Freemium Tools - How to Limit the Number of Uses Before Showing Results

I've done some searches for this and can't seem to find anything on it. I'm looking for a starting point to create freemium model tools.
The languages that I'll be using are PHP, Ajax and MySQL.
Here's what I would like to get done.
Any random user can use the free tools on the site, but after X number of uses, they are asked to register an account, otherwise, they can't use the tool for another 24 hours.
From what I've seen of other tools, it seems to be done by tracking IPs and storing them in a DB. But I can see this getting pretty messy after hitting millions of rows.
Can anyone with experience provide guidance on how I can start limiting the number of uses? I just have no idea where to start at this point.
If they don't register with an email first, then the only solution I can think of is IP tracking. It doesn't have to get messy if you set it up right: you just need a table with a column for the IP, a column for a counter, and a column for the date and time.
Then, when you insert the data, run another query at the same time to delete rows older than 24 hours. Some people use the IP combined with device info.
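The table logic above can be sketched like this (an in-memory Python sketch standing in for the DB table; the limit of 5 free uses and the IP value are made up for illustration):

```python
import time

WINDOW = 24 * 3600  # seconds in the 24-hour window
LIMIT = 5           # hypothetical number of free uses before registering

usage = {}  # ip -> list of use timestamps; stands in for the DB table

def allow_use(ip, now=None):
    """Record one use for this IP; return False once the limit is hit.

    Mirrors the suggested scheme: on every insert we also purge entries
    older than 24 hours (the DELETE), so storage never grows unbounded.
    """
    now = time.time() if now is None else now
    stamps = [t for t in usage.get(ip, []) if now - t < WINDOW]  # the DELETE
    if len(stamps) >= LIMIT:
        usage[ip] = stamps
        return False
    stamps.append(now)  # the INSERT
    usage[ip] = stamps
    return True
```

In SQL this would be one DELETE on rows older than NOW() - INTERVAL 24 HOUR followed by a COUNT check and an INSERT, keyed on the IP column.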

How to quickly determine if multiple places are within a user's vicinity - Google Places API

I am designing a web app where I need to determine which places listed in my DB are in the users driving distance.
Here is a broad overview of the process that I am currently using -
Get the user's current location via Google's Maps API.
Run through each place in my database (approx. 100), checking whether the place is within the user's driving distance using the Google Places API. I fetch and parse the JSON with PHP to see whether any locations exist given the user's coordinates.
If a place is within the user's driving distance, display the top locations (limited to 20 by Google Places); otherwise don't display it.
This process works fine when I am running through a handful of places, but running through 100 places is much slower and makes 100 API calls. With Google's current limit of 100,000 calls per day, this could become an issue down the road.
So is there a better way to determine which places in my database are within a user's driving distance? I do not want to keep track of addresses in my DB; I want to rely on Google for that.
Thanks.
You can use the formula found here to calculate the distance between zip codes:
http://support.sas.com/kb/5/325.html
This is not precise (door-step to door-step) but you can calculate the distance from the user's zip code to the location's zip code.
Using this method, you won't even have to hit Google's API.
I have an unconventional idea for you. It will seem very, very odd when you think about it for the first time, as it does things in exactly the opposite order from what you would expect. However, you might come to see the logic.
To put it into action, you'll need a broad category of stuff that you want the user to see. For instance, I'm going to go with "supermarkets".
There is a wonderful API as part of Google Places called nearbySearch. Its real strength is that it allows you to rank places by distance. We will make use of this.
Pre-requisites
Modify your database to store the unique ID returned for nearbySearch places. This isn't against the ToS, and we'll need it.
Get a list of those IDs.
The plan
When you get the user's location, query nearbySearch for your category, and loop through results with the following constraints:
If the result's ID matches something in your database, you have that result. Bonus #1: the results are sorted by distance, ascending! Bonus #2: you already get the lat/lng for it!
If the result's ID does not match, you can either silently skip it or use it and add it to your database. This means you can quite literally update your database on the fly, with little to no manual work, as an added bonus.
When you have run through the results, you will be left with the IDs that never came up. Calculate the point-to-point distance to the furthest result in Google's data and you will have the maximum distance covered from your point. If this is too small, use the technique I described here to do a compounded search.
The only requirement is that you need to know roughly what you are searching for. However, consider this: your normal query cycle costs anywhere between 1 and 100 Google queries. My method takes 1 for a 50 km radius. :-)
To calculate distances, you will need Haversine's formula rather than doing a zip code lookup, by the way. This has the added advantage of being truly international.
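For the point-to-point step, the haversine formula looks like this (a standard implementation; the Paris/London coordinates are just an illustrative check):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lng points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Paris to London, roughly 344 km as the crow flies.
d = haversine(48.8566, 2.3522, 51.5074, -0.1278)
```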
Important caveats
This search method depends directly on the trade-off between the places you know about and the distance. For radii under about 10 km, this method will generate only one request.
If, however, you have to do compounded searching, bear in mind that each request cycle will cost you 3N, where N is the number of queries generated in the last cycle. Therefore, if you only have 3 places in a 100 km radius, it makes more sense to look up each place individually.
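The matching step of the plan can be sketched as follows (all place IDs and coordinates below are made up; in the real app the known IDs would come from your database and the results from a nearbySearch response):

```python
# IDs already stored in our DB (hypothetical values).
known_ids = {"pid_001", "pid_007", "pid_042"}

# What a nearbySearch response might boil down to, already sorted by
# distance ascending, as the API returns it with rankby=distance.
nearby_results = [
    {"id": "pid_007", "lat": 52.10, "lng": 4.30},
    {"id": "pid_999", "lat": 52.20, "lng": 4.40},  # not in our DB yet
    {"id": "pid_001", "lat": 52.30, "lng": 4.50},
]

# Places we know about, still in distance order: show these to the user.
matched = [p for p in nearby_results if p["id"] in known_ids]

# Unknown places: skip them, or insert them into the DB on the fly.
new_places = [p for p in nearby_results if p["id"] not in known_ids]
```

The whole user-facing query is then a single API call plus a set-membership check, instead of one Places call per database row.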

Back-updating a tree of rows in the DB from the result of the last update

I'm not sure this is even possible, but I would really like to be sure in order to create the most efficient code...
I need to build a query that works like an affiliate scheme:
When a user signs up, I need to see whether someone invited him or he got there on his own (basic URL param). If he was invited, I need to give him X points. If the inviter was invited by someone, I need to give that person X/2 points, and if he was also invited, I need to give his inviter X/4 points. This needs to continue until the parent at the top has no more "invited_by" value (null/0)...
I did this in PHP (a while loop) with a counter to calculate the number of points - 1/(X sqr $i) - but I'm not happy with it, because it costs me a SELECT and an UPDATE query each iteration...
I'm using php and mysql.
Is there someone who can think of a better way to do this?
Thanks!
I think that you will need an iterative solution. I don't see any hierarchical query operators in the MySQL manual that might help you out (but that isn't quite the same as saying there aren't any; my eyesight isn't perfect). You could perform it in a stored procedure, which would reduce the cost of the operation (less data transferred between database server and client).
Also, the X/2, X/4, X/8, ... sequence isn't captured by the expressions 1/2X or 1/(X sqr $i); I assume there was no intent to give the inviter a fraction of a point while the inviter's inviter gets X/4 points?
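The iterative walk up the chain can be sketched like this (an in-memory Python sketch; the user IDs are made up, the dict stands in for the invited_by column, and it reads the question as awarding the inviter X, the inviter's inviter X/2, and so on, halving at each level):

```python
# user_id -> inviter_id (None at the top of the tree, i.e. invited_by null/0)
users = {
    1: None,
    2: 1,
    3: 2,
    4: 3,
}
points = {uid: 0 for uid in users}

def award_chain(new_user, x):
    """Walk up the invited_by chain, halving the award at each level.

    In SQL terms each loop iteration is one SELECT (find the inviter)
    and one UPDATE (add the points); a stored procedure could run the
    same loop server-side to avoid the round trips.
    """
    inviter = users.get(new_user)
    share = x
    while inviter is not None:
        points[inviter] += share
        share /= 2
        inviter = users.get(inviter)

award_chain(4, 8)  # user 4 just signed up; user 3 invited them
```

Since plain MySQL has no hierarchical query operator for this, the loop (wherever it lives) terminates exactly when the invited_by lookup comes back empty, matching the "endless until null/0" requirement.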

Search a big database

I have a database which holds URLs in a table (along with many other details about each URL). I have another table which stores strings that I'm going to use to perform searches on each and every link. My database will be big; I'm expecting at least 5 million entries in the links table.
The application which communicates with the user is written in PHP. I need some suggestions on how I can search all the links with all the patterns (n x m searches) without causing a high load on the server and without losing speed. I want it to operate at high speed with low resource use. Any hints or suggestions, even in pseudo-code, are welcome.
Right now I don't know whether to use SQL commands to perform these searches with some help from PHP, or to do it completely in PHP.
First I'd suggest that you rethink the layout. It seems a little unnecessary to run this query for every user; instead, create a result table into which you insert the results of the query, and run it once and then again every time the patterns change.
Otherwise, make sure you have (full-text) indexes set on the fields you need. For the query itself you could join the tables:
SELECT
yourFieldsHere
FROM
theUrlTable AS tu
JOIN
thePatternTable AS tp ON tu.link LIKE CONCAT('%', tp.pattern, '%');
I would say that you pretty definitely want to do that in the SQL code, not the PHP code. Also, searching on the URL strings is going to be a slow operation, so perhaps some form of hashing would be good. I have seen someone use a variant of a Zobrist hash for this before (Google will bring back a load of results).
Hope this helps,
Dan.
Do as much of the searching as you practically can within the database. If you're ending up with an n x m result set and starting with at least 5 million rows, that's a LOT of data to be repeatedly slurping across the wire (or socket, however you're connecting to the DB) just to end up throwing most of it away each time. Even if the DB's native search capabilities ('LIKE' matches, regexps, full-text, etc.) aren't up to the task, culling unwanted rows BEFORE they get sent to the client (your code) will still be useful.
You should optimize your tables in the DB. Use an MD5 hash: a new column holding the MD5 of the URL can use an index, so exact text lookups are much faster.
But it won't help if you use LIKE '%text%'.
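The MD5-column idea can be sketched like this (a Python sketch where a dict stands in for the indexed column; the URLs are made up): store a fixed-length hash of each URL so exact-match lookups compare a short indexed value instead of long strings. As noted, this only speeds up exact matches, not substring searches.

```python
import hashlib

urls = ["https://example.com/a", "https://example.com/b"]

def md5_hex(s):
    """Fixed-length 32-char hex digest, suitable for an indexed CHAR(32) column."""
    return hashlib.md5(s.encode("utf-8")).hexdigest()

# Stands in for `WHERE url_md5 = MD5(?)` against an indexed column.
index = {md5_hex(u): u for u in urls}

def exact_lookup(url):
    return index.get(md5_hex(url))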
You can use Sphinx or Lucene.
