I am new to GTFS, and in my research I found information saying that one has to provide the GTFS feed as txt files in order to publish routes/transit information, etc.
Now my question is,
1) If we create our own txt files and upload them to a GTFS feed provider, will the data show up on Google Maps as well?
2) I would like to have my own GTFS server code that takes data from my DB, processes it, and provides the best transit routes. Is that possible? Assume I am able to run Python as well as PHP scripts.
Any help would be greatly appreciated!
Thanks in advance
No, since you would need to enter into an agreement with Google for them to use your data, and they're unlikely to take you seriously unless you're affiliated with an actual transit agency. But if you're curious you can read about the steps involved.
Yes, it's possible, and there are open-source routing engines available for you to use, like OpenTripPlanner and Graphserver. This is pretty heavy-duty stuff, however. If what you have is a basic Web-hosting account and you just want to do "something interesting" with transit data, setting up an online trip planner is probably not the place to start.
I think the most straightforward solution would be for you to run OpenTripPlanner on a server of your own. This would provide your users with a familiar-looking website they can use to generate trip plans from your data while leaving you complete control over the data itself.
Note that running OpenTripPlanner would require a fairly powerful server plus map data from OpenStreetMap (which I'm assuming is available for your area) in addition to your own transit data. On the project's website you'll find setup instructions for Ubuntu to give you an idea of what's involved.
I'm assuming you're already able to generate a GTFS bundle; that is, to produce a ZIP file containing comma-separated data files as specified in the GTFS Reference. With an OpenTripPlanner server set up, your workflow would be as simple as:
1. Making changes to your transit data.
2. Generating a new GTFS bundle.
3. Uploading the bundle to a specific folder on your OpenTripPlanner server.
4. Restarting OpenTripPlanner.
5. Optionally, notifying your users of the changes.
Every step except the first could be automated with a script.
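For example, step 2 could be a small PHP script run from the command line. This is only a sketch, assuming your database export already writes the .txt files to disk; the paths and file list are placeholders to adapt to your own setup:

    <?php
    // build_gtfs_bundle.php - zip the exported GTFS .txt files into a bundle (sketch)
    $exportDir = '/var/exports/gtfs';                 // where the DB export writes the .txt files
    $bundle    = '/var/otp/graphs/default/gtfs.zip';  // folder your OpenTripPlanner instance reads

    $files = array('agency.txt', 'stops.txt', 'routes.txt',
                   'trips.txt', 'stop_times.txt', 'calendar.txt');

    $zip = new ZipArchive();
    if ($zip->open($bundle, ZipArchive::CREATE | ZipArchive::OVERWRITE) !== true) {
        die("Cannot create $bundle\n");
    }
    foreach ($files as $file) {
        $path = "$exportDir/$file";
        if (!is_file($path)) {
            die("Missing GTFS file: $file\n");
        }
        $zip->addFile($path, $file);   // store the file at the root of the archive, as GTFS requires
    }
    $zip->close();
    echo "Wrote $bundle\n";

Steps 3 and 4 could then be a one-line copy to the server and a service restart, triggered from the same script or a cron job.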
In response to your first question, Google needs to be informed of the transit feed. Here's the latest link from Google to get you started: https://support.google.com/transitpartners/answer/1106422. They also require confirmation from an authorised representative of the transit agency that this is an authorised GTFS feed. I should note that "txt formats" is not strictly correct: the file you need to create is a GTFS feed (General Transit Feed Specification), which in essence is a ZIP file of mandatory and optional txt files in CSV format. To create it, you'll either need to build the individual files yourself based on a detailed understanding of GTFS, or use a GTFS editor/API such as the one offered by AddTransit.
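To make that concrete, a minimal bundle is just a ZIP containing files like the ones below, each a plain CSV with a header row. The field lists shown here are abbreviated and purely illustrative; the GTFS Reference has the authoritative list of required and optional files and fields:

    agency.txt      agency_id,agency_name,agency_url,agency_timezone
    stops.txt       stop_id,stop_name,stop_lat,stop_lon
    routes.txt      route_id,route_short_name,route_long_name,route_type
    trips.txt       route_id,service_id,trip_id
    stop_times.txt  trip_id,arrival_time,departure_time,stop_id,stop_sequence
    calendar.txt    service_id,monday,...,sunday,start_date,end_date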
2) You can install routing software on your own servers. However, if Google is already using your GTFS data, another alternative is to create a simple form on your website where customers enter their from and to locations, and then hand the query off to Google Maps' transit directions to return the proposed route. Here's a simple example that you could extend to meet your needs: https://addtransit.com/blog/2016/01/add-google-maps-public-transport-directions/
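The simplest version of that form is just a redirect to Google's directions URL with travelmode=transit. A rough sketch (redirect.php is a made-up filename, and you'd want some validation around the inputs):

    <!-- from/to form that hands the query off to Google Maps transit directions -->
    <form action="redirect.php" method="get">
      From: <input type="text" name="from">
      To: <input type="text" name="to">
      <input type="submit" value="Plan trip">
    </form>

    <?php
    // redirect.php - send the user to Google Maps with transit directions pre-selected
    $from = urlencode(isset($_GET['from']) ? $_GET['from'] : '');
    $to   = urlencode(isset($_GET['to'])   ? $_GET['to']   : '');
    header('Location: https://www.google.com/maps/dir/?api=1'
         . "&origin=$from&destination=$to&travelmode=transit");
    exit;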
I've created a rather large Business Intelligence System over the years. It already offers vCard export of customers and contacts. But this of course is a tedious process, where each employee has to manually import the vCard file on their phones.
So I was reading about CardDAV and thought this might be a better solution. However, all I really need is to be able to provide a read-only source for contacts. It should not be possible for anyone to make changes to the contacts (well, except temporarily in their own phonebook - until next synchronization happens). And all other functionality isn't interesting either. I only need the "synchronize contacts from BI to phones" part.
I was hoping it would be simple. Something along the lines of just using the url to the vCard generated file (or PHP file that generates it). But I can see this question has been asked a few times before, and no one has given any answers, so I guess it's not as simple as that.
Can anyone shed some light on this? Is it possible to just provide a simple read-only URL that is compatible with the CardDAV protocol?
And if not, is there some other protocol that supports something like that?
It isn't possible with a single endpoint URL, but it isn't super complicated either. To make it read-only, you reject PUTs with a "403 Forbidden" and optionally also add the relevant WebDAV permission properties (though many clients might ignore those).
You'll need:
One endpoint for the CardDAV principal record. This represents the user accessing your system and points the client to the "CardDAV home". It is a simple XML document returned in response to a PROPFIND.
One endpoint for the CardDAV "home". This is a WebDAV collection that contains the "contact folders" you expose, quite likely just one. Again a simple XML document, again hit with a PROPFIND.
One endpoint representing the CardDAV contacts folder. This is the thing pointing to the actual vCard resources: an XML document that lists the URLs of the contained vCards, again hit with a PROPFIND.
And finally one endpoint for each vCard, queried with a GET (and, if you wanted to allow creation/modification/deletion, with PUT or DELETE). A rough sketch of how this could be routed follows the list.
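Here is a very rough read-only front controller in PHP, just to show the shape of it. get_all_contacts() and get_contact_vcard() stand in for your own BI functions, only the contacts-folder PROPFIND is shown, and real clients will also send REPORT requests and expect properties like getetag, so a library such as sabre/dav will save you a lot of work:

    <?php
    // carddav.php - read-only CardDAV front controller (sketch, not a complete implementation)
    $method = $_SERVER['REQUEST_METHOD'];
    $path   = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

    // Read-only: refuse anything that would modify data.
    if (in_array($method, array('PUT', 'DELETE', 'PROPPATCH', 'MKCOL'))) {
        http_response_code(403);
        exit;
    }

    // One URL per vCard, fetched with GET.
    if ($method === 'GET' && preg_match('#^/carddav/contacts/(\d+)\.vcf$#', $path, $m)) {
        header('Content-Type: text/vcard; charset=utf-8');
        echo get_contact_vcard($m[1]);      // your function: vCard text for one contact
        exit;
    }

    // The contacts folder: a multistatus listing of the vCard URLs.
    if ($method === 'PROPFIND' && $path === '/carddav/contacts/') {
        http_response_code(207);            // WebDAV Multi-Status
        header('Content-Type: application/xml; charset=utf-8');
        echo '<?xml version="1.0" encoding="utf-8"?>' . "\n" . '<d:multistatus xmlns:d="DAV:">' . "\n";
        foreach (get_all_contacts() as $id) {   // your function: list of contact IDs
            echo "  <d:response><d:href>/carddav/contacts/$id.vcf</d:href>\n"
               . "    <d:propstat><d:prop><d:getcontenttype>text/vcard</d:getcontenttype></d:prop>\n"
               . "    <d:status>HTTP/1.1 200 OK</d:status></d:propstat></d:response>\n";
        }
        echo '</d:multistatus>';
        exit;
    }

    // The principal and home endpoints need similar, smaller PROPFIND responses.
    http_response_code(404);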
This is a great resource on the protocol: https://sabre.io/dav/carddav/
Another option might be LDAP, but that's a little more complicated than CardDAV. (You could use OpenLDAP to serve the protocol and fill it using LDIF files.)
I have a client who has a feed of leads which have Name, IP, Address, and Opt-In Time/Date, and I want them to be able to post the data to my hosted SQL database. If you are familiar with lead generation you will get what I'm trying to do.
I'd also like to know if it's possible to write a script and place it on my server so that when someone posts a CSV file to it, the script automatically inserts the data from the CSV into the SQL server.
Is this possible? And are there any tutorials or reference manuals, sources, etc. I can use to accomplish this?
The answer to your question is Yes.
You can go about this two ways:
Write an API for your database which is consumed by those wishing to search/write/query your database. To do this, you can use any language you are comfortable with. Note that PHP, XML and Python are not interchangeable: XML is a format specification; it describes what the data should look like while it is being transported between two systems. So you can use any programming language that provides XML libraries to write your code. In addition to XML, JSON has emerged as the more popular transport format, especially for mobile and web applications.
The second option is to use a service like Apigee, Google Cloud Endpoints or Mashery, which will automate a lot of this process for you. Each requires its own amount of effort (with Google Cloud Endpoints perhaps requiring the most). For example, Apigee will automatically create an API for you as long as you can provide it access to your data source.
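If you go the roll-your-own route, the CSV-upload script you describe is only a handful of lines of PHP. A sketch, where the table and column names (leads, name, ip, address, optin_at) are invented and you would still need to add authentication and validation before exposing it:

    <?php
    // import_leads.php - accept an uploaded CSV and insert its rows into MySQL (sketch)
    $pdo = new PDO('mysql:host=localhost;dbname=yourdb;charset=utf8mb4', 'user', 'password',
                   array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

    if (empty($_FILES['csv']['tmp_name'])) {
        http_response_code(400);
        die("POST a file in the 'csv' field\n");
    }

    $stmt = $pdo->prepare('INSERT INTO leads (name, ip, address, optin_at) VALUES (?, ?, ?, ?)');

    $fh = fopen($_FILES['csv']['tmp_name'], 'r');
    fgetcsv($fh);                                    // skip the header row
    $count = 0;
    while (($row = fgetcsv($fh)) !== false) {
        $stmt->execute(array($row[0], $row[1], $row[2], $row[3]));
        $count++;
    }
    fclose($fh);

    echo "Imported $count leads\n";

The client posting the file would then use an ordinary multipart upload, for example curl -F csv=@leads.csv https://yourserver.example/import_leads.php.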
I've been on this one for the last 2 days...
All I want to do is, with my username and password, access my Google Calendar events/calendars and create, edit or delete them with my PHP web application.
I've looked at the Google API, Zend Framework, OAuth 2.0 with Google, etc., and there is always a piece of information missing to make it work.
What is the quickest and simplest way, on a shared server, to get access to my Google Calendar? Is there a class/PHP function file somewhere that I can simply install in a www.websitename.com/include directory that will not make me want to pull my hair out?
I want to use ready-made functions like this: updateevent(username, pass, id, calendar, title, details, location, etc);
New APIs can be a little scary, especially to programmers who are still somewhat new to a particular platform. Google's Calendar API is very well documented (like all of Google's APIs); you just have to read the docs. It's easy once you get used to it.
http://code.google.com/apis/calendar/v3/using.html
It basically involves this:
Acquire an API key
Include two files in your PHP code
Authenticate (steps up to this point only take a few lines of code)
Do what you need to do!
It's the only official and truly the easiest way to directly work with Google Calendar. What piece of information is still missing?
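In case it helps, here is roughly what those steps look like with Google's official PHP client library (google/apiclient). The class names are from that library, but treat this as a sketch: the credentials file, calendar ID and event data are placeholders, and the exact authentication flow you need (API key, OAuth, or a service account) depends on whose calendar you are accessing:

    <?php
    require_once 'vendor/autoload.php';                 // the library's autoloader

    $client = new Google_Client();
    $client->setAuthConfig('credentials.json');         // credentials downloaded from Google
    $client->addScope(Google_Service_Calendar::CALENDAR);

    $service = new Google_Service_Calendar($client);

    // Roughly the updateevent(...) helper you describe, for the "create" case
    $event = new Google_Service_Calendar_Event(array(
        'summary'  => 'Meeting with client',
        'location' => 'Office',
        'start'    => array('dateTime' => '2016-05-01T10:00:00', 'timeZone' => 'Europe/Paris'),
        'end'      => array('dateTime' => '2016-05-01T11:00:00', 'timeZone' => 'Europe/Paris'),
    ));
    $created = $service->events->insert('primary', $event);   // 'primary' = your own calendar
    echo $created->getId();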
I am defining the specs for a live activity feed on my website. I have the backend data model done, but the open question is the actual code development: my development team is unsure of the best way to make the feeds work. Is this purely done by writing custom code, or do we need to use existing frameworks to make the feeds work in real time? One suggestion thrown at me was to use reverse AJAX for this. Someone else mentioned having the client poll the server every x seconds, but I don't like this because it creates unwanted server traffic when there are no updates. A push engine like Lightstreamer, which pushes from server to browser, was also mentioned.
So in the end: what is the way to go? Is it code related, purely pushing SQL queries, using frameworks, using platforms, etc.?
My platform is written in PHP (CodeIgniter) and the DB is MySQL.
The activity stream will have lots of activities. There are 42 components on the social network I am developing, and each component has roughly 30 unique activities that can be streamed.
Check out http://www.stream-hub.com/
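stream-hub is a Comet ("reverse AJAX") push server, which addresses exactly the polling concern you raised. If you just want to see what the reverse-AJAX idea looks like without a product, a crude long-polling endpoint in plain PHP might look like the sketch below. The table and column names are invented, and a real deployment has to worry about tying up PHP workers, which is precisely what the dedicated push servers solve:

    <?php
    // poll.php - hold the request open until there is activity newer than ?since=<id> (sketch)
    $pdo   = new PDO('mysql:host=localhost;dbname=social;charset=utf8mb4', 'user', 'password');
    $since = isset($_GET['since']) ? (int) $_GET['since'] : 0;

    for ($i = 0; $i < 30; $i++) {                     // give up after roughly 30 seconds
        $stmt = $pdo->prepare('SELECT id, component, action, created_at
                                 FROM activities WHERE id > ? ORDER BY id LIMIT 50');
        $stmt->execute(array($since));
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
        if ($rows) {
            header('Content-Type: application/json');
            echo json_encode($rows);                  // client calls again with the newest id it saw
            exit;
        }
        sleep(1);                                     // nothing new yet; wait and check again
    }
    header('Content-Type: application/json');
    echo json_encode(array());                        // timeout: the client simply reconnects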
I have been using superfeedr.com with Rails and I can tell you it works really well. Here are a few facts about it:
Pros
Julien, the lead developer, is very helpful when you encounter a problem.
Immediate push of new entries for feeds that support PubSubHubbub.
JSON responses, which are easy to parse however you'd like.
A retrieve API, in case the update callback fails and you need to fetch the latest entries for a given feed.
Cons
Documentation is not up to the standards I would like, so you'll likely end up searching the web to find obscure implementation details.
You can't control how often Superfeedr fetches each feed; they use a secret algorithm to determine that.
The web interface lets you manage your feeds, but becomes difficult to use when you subscribe to a lot of them.
The subscription verification mechanism works synchronously, so you need to make sure the callback URL is ready for the Superfeedr callback to hit it (they do provide an async option, which did not seem to work well).
Overall I would recommend superfeedr as a good solution for what you need.
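For reference, the update callback mentioned above is just an HTTP endpoint on your side that Superfeedr POSTs the new entries to. A minimal PHP receiver might look like this; the exact JSON structure should be checked against Superfeedr's documentation, and the queue table name is invented:

    <?php
    // callback.php - endpoint Superfeedr pushes new entries to (sketch)
    $payload = file_get_contents('php://input');
    $data    = json_decode($payload, true);

    if ($data === null) {
        http_response_code(400);          // not JSON we can parse
        exit;
    }

    // Store the raw notification for later processing by your feed code.
    $pdo  = new PDO('mysql:host=localhost;dbname=feeds;charset=utf8mb4', 'user', 'password');
    $stmt = $pdo->prepare('INSERT INTO feed_notifications (received_at, payload) VALUES (NOW(), ?)');
    $stmt->execute(array($payload));

    http_response_code(200);              // a 2xx response tells Superfeedr the push was received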
When you offer a .torrent file for download on your website, how can you get the number of seeds and peers for that torrent and show them to the user?
You have to contact the tracker(s) listed in the torrent file.
If the tracker supports "scraping", that is probably the request you want. Otherwise it's up to the tracker to decide how many peers it wants to return to you, and you have no idea whether a given peer is a seed or a leech before contacting it.
The torrent file is in bencoded format; look for a bdecode PHP library to easily parse the info.
Provide the info hash (the SHA-1 hash of the bencoded "info" dictionary in the metadata) and the tracker will respond if you follow the protocol; read http://en.wikipedia.org/wiki/BitTorrent_%28protocol%29 for more information.
You would scrape the tracker by sending an HTTP GET request to it with a URL formed as described at http://wiki.theory.org/BitTorrentSpecification#Tracker_.27scrape.27_Convention -- the scrape URL is derived from the announce URL(s) in the metainfo's "announce" and "announce-list" keys.
The tracker's response is described in that same wiki.theory.org link. It includes the seeder/leecher counts that you're looking for.
Note that modern .torrent files typically have several trackers included in their announce-list, so you may want to scrape more than one for better information. However you've got no way of knowing which peers overlap from tracker A to tracker B, so the best you can really do from scraping multiple trackers is to come up with a range of the minimum/maximum number of leechers and seeders in the swarm.
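Putting the two answers together, a rough PHP sketch of a single-tracker scrape might look like this. bdecode() and bencode() are assumed to come from one of the PHP bencoding libraries mentioned above, and error handling, UDP trackers, announce URLs with query strings, and multi-tracker torrents are all left out:

    <?php
    // scrape.php - ask a tracker how many seeders/leechers a torrent has (sketch)
    $torrent  = bdecode(file_get_contents('example.torrent'));
    $infohash = sha1(bencode($torrent['info']), true);       // raw 20-byte SHA-1 of the info dict

    // Per the scrape convention, swap 'announce' for 'scrape' in the last path segment
    $scrape = preg_replace('#/announce([^/]*)$#', '/scrape$1', $torrent['announce']);

    $response = bdecode(file_get_contents($scrape . '?info_hash=' . urlencode($infohash)));
    $stats    = $response['files'][$infohash];

    echo 'Seeders:   ' . $stats['complete']   . "\n";   // peers with the complete file
    echo 'Leechers:  ' . $stats['incomplete'] . "\n";   // peers still downloading
    echo 'Downloads: ' . $stats['downloaded'] . "\n";   // completed downloads the tracker has seen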