I'm working on a simple geolocation tracker based on this project and OsmAnd. OsmAnd uses URL parameters to send geolocation data to the web service, which then writes it to a file. The URL input into OsmAnd looks like
http://example.com/tracker.php?key=j2R1nrQ&lat={0}&lon={1}&timestamp={2}&hdop={3}&altitude={4}&speed={5},
where the {#} is replaced by the location data by OsmAnd.
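For reference, the server side amounts to a script like the following sketch; the key check, file name, and CSV format are illustrative assumptions rather than the exact code:

<?php
// Sketch of tracker.php: validate the key, then append one line per ping.
// 'track.csv' and the error handling are assumptions for illustration.
$expectedKey = 'j2R1nrQ';

if (!isset($_GET['key']) || $_GET['key'] !== $expectedKey) {
    http_response_code(403);
    exit;
}

$fields = ['lat', 'lon', 'timestamp', 'hdop', 'altitude', 'speed'];
$row = [];
foreach ($fields as $field) {
    $row[] = isset($_GET[$field]) ? $_GET[$field] : '';
}

file_put_contents('track.csv', implode(',', $row) . "\n", FILE_APPEND | LOCK_EX);
echo 'OK';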
I have confirmed that OsmAnd is pinging my site with correctly formed data. If I open the link in a browser, it is correctly writing the data to the file, but when the app on my phone pings the page, it is not. PHP is run on the server, right? So why would it make a difference that an app on my phone is pinging the site vs. my browser?
I figured it out. The app was generating a 406 error on the server, which was caused by Apache mod_security. Disabling mod_security solved the issue.
I have hosted a FileMaker database on the web using FileMaker Instant Web Publishing. I am trying to access it directly - as in bypass the login page - using the php command header().
This is the code I am currently using.
header("location: http://<serverIPaddress>/fmi/iwp/cgi?dbpath=%2Ffmi%2Fiwp%2Fcgi%3F-db%3D<databasename>%26-startsession&acct=account&name=<username>&password=<password>&login=Login&-authdb");
I got this code from here: http://lnx.acidsoft.net/problemsolved/bypass-filemaker-iwp-login-via-url.html
I don't believe the FileMaker part is especially relevant, since accessing the database works fine when I remove all the extra encoding; I just can't bypass the login that way.
I believe it is more a problem with how the URL is structured, and with me not having enough experience to know how different browsers, or the language I'm using, will affect the URL.
When I currently try to execute it, I get two dialog boxes.
First, I get one that says:
Bad Request
The server could not process your request due to a missing command: "".
The second one says:
Bad Request
The server could not process your request because your session has timed out, been closed, or communication with the server has been lost.
Please reselect the database to begin a new session.
If you cannot open the database, please contact your database administrator.
Any ideas? I am using a button to call the .php file that calls this particular line, and I'm testing it on Safari and Google Chrome.
Kevin, I had the same exact problem. I found an example of a working link here:
https://community.filemaker.com/thread/73562?start=0&tstart=0
...and the difference from my (failed) URL was that the = characters in the dbpath portion should be encoded as %3D.
For me, having the literal = symbol there made the link not work.
http://<DATABASE IP OR HOSTNAME>/fmi/iwp/cgi?dbpath=%2Ffmi%2Fiwp%2Fcgi%3F-db%3D<DATABASE NAME>%26-startsession&acct=account&name=<USERNAME>&password=<PASSWORD>&login=Login&-authdb
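As a sketch of where those escapes come from, PHP's rawurlencode() turns = into %3D and / into %2F, so you can build the nested dbpath value programmatically instead of hand-encoding it (the placeholder values below are assumptions):

<?php
// Placeholder values; substitute your own server, database, and account.
$server   = '<serverIPaddress>';
$database = '<databasename>';
$username = '<username>';
$password = '<password>';

// The dbpath parameter is itself a URL, so it must be encoded as a whole:
// "/" becomes %2F, "?" becomes %3F, "=" becomes %3D, "&" becomes %26.
$dbpath = rawurlencode("/fmi/iwp/cgi?-db=$database&-startsession");

$url = "http://$server/fmi/iwp/cgi?dbpath=$dbpath"
     . "&acct=account&name=" . rawurlencode($username)
     . "&password=" . rawurlencode($password)
     . "&login=Login&-authdb";

header("Location: $url");
exit;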
Good luck!
I am working on a scraping project to extract data from a website. I made a script that goes through URLs, parses the HTML content, and stores the structured content in my database. The script was working fine, but recently it got stuck, and on investigation I found that the target site is blocking our IP.
I am using PHP/cURL for this project, and I am now getting a 403 Access Forbidden error on every web request.
This has broken my script: no pages can be retrieved, and every request returns the access-restriction error.
I know there is a lot of scraping etiquette to follow. Since we can't foresee how they implemented their security features, I was unsure how to normalize my web requests.
I'm working on an Amazon AWS instance with an elastic IP, so I am also unsure when, or whether, they will lift the ban on my IP.
I have heard of rotating proxies being used for scraping so that the target server won't block you as often, but I'm not sure about the implementation.
Any help would be highly appreciated. I can provide any additional information if necessary.
Sign up with a scraping API service to get an API key. If you send your request to the service with the API key and the target URL, it will forward the request to that URL through a random proxy and return the response.
Just sign up and try it.
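For the rotating-proxy approach mentioned in the question, a rough PHP/cURL sketch looks like this; the proxy addresses are placeholders you would replace with ones from your proxy provider:

<?php
// Placeholder proxy list; replace with real proxy addresses.
$proxies = [
    '203.0.113.10:8080',
    '203.0.113.11:8080',
    '203.0.113.12:8080',
];

function fetch_via_random_proxy($url, array $proxies)
{
    // Pick a proxy at random so consecutive requests come from different IPs.
    $proxy = $proxies[array_rand($proxies)];

    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_PROXY          => $proxy,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_TIMEOUT        => 30,
        // A realistic User-Agent also helps avoid trivial blocks.
        CURLOPT_USERAGENT      => 'Mozilla/5.0 (compatible; MyScraper/1.0)',
    ]);

    $body = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    return [$code, $body];
}

list($code, $html) = fetch_via_random_proxy('http://example.com/page', $proxies);
if ($code === 403) {
    // Back off, rotate to another proxy, or slow down the crawl.
}

Throttling your requests and respecting robots.txt matter at least as much as rotating IPs.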
I have an application that retrieves some info from a certain public website and shows it to the user. However, I am not sure whether my app should connect to the target website directly, or whether it should get the info through my own web server using a simple PHP script (returning JSON).
I am using Jsoup to get the information, and I have tried both approaches (direct and via PHP) and they worked perfectly. However, I have not published my app yet because of this confusion.
Use the web service. If your client has logic to parse the HTML, it can break when the web page changes. The web service can absorb this change and make corrections, but your client cannot. Not unless you release another version of your app, and that can be a pain.
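A minimal sketch of such an endpoint in PHP, with the target URL and the XPath query assumed purely for illustration:

<?php
// Hypothetical endpoint (e.g. info.php); the URL and query are assumptions.
header('Content-Type: application/json');

$html = @file_get_contents('http://example.com/public-page');
if ($html === false) {
    http_response_code(502);
    echo json_encode(['error' => 'upstream fetch failed']);
    exit;
}

$doc = new DOMDocument();
@$doc->loadHTML($html);            // suppress warnings from messy HTML
$xpath = new DOMXPath($doc);

// If the page layout changes, only this query needs updating,
// not the installed app.
$nodes = $xpath->query('//h2[@class="title"]');

$titles = [];
foreach ($nodes as $node) {
    $titles[] = trim($node->textContent);
}

echo json_encode(['titles' => $titles]);

The app then parses a stable JSON shape instead of the site's HTML.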
I'm developing a project using JavaScript, PHP and OpenLayers. A lot of maps are loaded over an HTTPS connection from an external OGC server.
When I try to load the maps over HTTPS, they don't load; instead, I get an "Error loading the map, try again later" message.
I think the problem is the digital certificate. If I load a map directly from the server with a WMS call like this (note the last parameter):
https://serverurl/ogc/wms?service=WMS&version=1.1.0&request=GetMap&layers=ms1:lp_anual_250&styles=&bbox=205125.0,3150125.0,234875.0,3199875.0&width=306&height=512&srs=EPSG:4326&format=application/openlayers
The browser asks me for authorization to view it. If I accept the digital certificate, I can see the map. After that, because my browser now accepts the certificate, I can also see my own map from my own application.
So, the question is: is there any way to prompt for the digital certificate manually when the user accesses my site?
Thanks in advance!
PS: solutions using PHP are welcome too because I'm using CodeIgniter to load views
You could try opening the WMS URL in a div or perhaps a hidden iframe - that may cause the browser to pop up its 'Unknown cert' dialogue.
I'm going to quote another user (geographika) from gis.stackexchange. I hope it can help someone with the same issue:
You can use a proxy on your server so all client requests are made to your server, which deals with the certificate, gets the request and passes it back to the client. For PHP have a look at http://tr.php.net/manual/en/function.openssl-verify.php
If you are also using WMS software (MapServer, GeoServer) you could implement the same technique using a cascading WMS server. For details on how to do this in MapServer see http://geographika.co.uk/setting-up-a-secure-cascading-wms-on-mapserver
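As a rough illustration of that proxy idea in plain PHP with cURL; the script name, parameter pass-through, and certificate handling are assumptions:

<?php
// Hypothetical wms_proxy.php: the browser requests this script from your
// own origin, and the server fetches the tile from the external OGC
// server, handling its certificate server-side.
$wmsBase = 'https://serverurl/ogc/wms';
$url     = $wmsBase . '?' . $_SERVER['QUERY_STRING'];

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    // Validate the external server's certificate against a CA bundle
    // stored on the server instead of relying on the user's browser.
    CURLOPT_CAINFO         => '/path/to/ogc-server-ca.pem',
    CURLOPT_TIMEOUT        => 30,
]);

$image = curl_exec($ch);
$type  = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
curl_close($ch);

if ($image === false) {
    http_response_code(502);
    exit('WMS request failed');
}

header('Content-Type: ' . $type);
echo $image;

OpenLayers would then point its WMS layer URL at wms_proxy.php instead of the external server.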
I'm wondering what effect loading an external page with PHP has on a site's analytics. If PHP is loading an external page, and not an actual browser, will the JavaScript that reports back to Google Analytics register the page load as a hit?
Any JavaScript within the fetched page will not be run and will therefore have no effect on analytics. The reason is that the fetched HTML is never parsed by an actual browser, so no JavaScript is executed.
cURL will not automatically download the JavaScript files the HTML refers to, so unless you explicitly fetch and execute the Google Analytics script, Google won't detect the cURL hit.
Google offers a non-JavaScript method of tracking hits. It's intended for mobile sites, but may be repurposable for your needs.
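To make that concrete, here is a small sketch (the URL is a placeholder): the analytics script tag comes back as plain text in the response, and nothing in it ever runs on the server:

<?php
// Fetch an external page server-side; the URL is a placeholder.
$ch = curl_init('http://example.com/some-page');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$html = curl_exec($ch);
curl_close($ch);

// $html is just a string. Any Google Analytics snippet inside it is
// inert text here; no JavaScript engine ever executes it.
if (strpos($html, 'google-analytics.com') !== false) {
    echo "The page contains an analytics tag, but it was never executed.\n";
}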
You're misunderstanding how curl/file_get_contents work. They're executed on the server, not in the client's browser. As far as Google and any regular user are concerned, they'll see the output of those calls, not the calls themselves.
e.g.
client requests page from server A
server A requests page from server B
server B replies with page data to server A
server A accepts page data from server B
server A sends page data to client
Assuming that all the requests work properly, don't issue any warnings or errors, and there are no network glitches between server A and server B, there is absolutely no way for the client to see exactly what server A is doing. It could be sending a local file. It could be executing a local script and sending its output. It could be offshoring the request to a server in India which does the hard work and then simply claims the credit for it, etc.
Now, you CAN get the client to talk to server B directly. You could have server A spit out an HTML page that contains an iframe, image tag, script tag, css file, etc... that points to server B. But that's no longer transparent to the client - you're explicitly telling the client "hey, go over there for this content".
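For that last, non-transparent case, server A's output might look like the following sketch; the hostname and paths are placeholders:

<?php
// Server A emits markup that makes the client talk to server B directly.
echo '<iframe src="https://server-b.example/widget" width="300" height="200"></iframe>';
echo '<img src="https://server-b.example/pixel.gif" alt="">';
echo '<script src="https://server-b.example/tracker.js"></script>';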