What I have found so far is:
The proxy (Squid) returns error code 417. This is due to the HTTP/1.1 "Expect: 100-continue" header, which Squid does not handle properly.
When I suppress the "Expect: 100-continue" header, curl reports an incorrect header size.
How do I proceed from here?
For Squid, try this configuration directive:
ignore_expect_100 on
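If you cannot change the Squid configuration, the header can also be suppressed on the client side. With PHP's cURL bindings, sending an empty Expect entry is the usual trick (a sketch, assuming $ch is your existing cURL handle):

// An empty "Expect:" entry stops cURL from adding
// "Expect: 100-continue" to large POST requests.
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Expect:'));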
If the Squid proxy MUST be used AND you cannot fix Squid, then you only have one solution: tunnel the API calls through to a server outside of your network and have that server forward the API calls to Amazon S3 on your behalf.
At its most basic, you can just replicate all the S3 calls you use on your external server, but you must be aware of the security implications, i.e. restricting usage of the server to, say, the external IP address of your Squid server, or even API keys much like Amazon uses itself.
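A very rough sketch of such a forwarder in PHP (the allowed IP is a placeholder; real S3 request signing, error handling, and HTTPS setup are omitted):

<?php
// Runs on the external server: only accept requests from the Squid box,
// then replay them against S3.
$allowedIp = '203.0.113.10'; // external IP of your Squid server (placeholder)
if ($_SERVER['REMOTE_ADDR'] !== $allowedIp) {
    header($_SERVER['SERVER_PROTOCOL'] . ' 403 Forbidden', true, 403);
    exit;
}

$ch = curl_init('https://s3.amazonaws.com' . $_SERVER['REQUEST_URI']);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, $_SERVER['REQUEST_METHOD']);
echo curl_exec($ch);
curl_close($ch);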
If you have more flexibility, try another proxy, preferably a non-caching one such as Pound.
I've searched a lot, but I couldn't find a PHP proxy server that can run on a shared host, so I decided to build a very simple one from scratch, though I'm still on the first step. I've created a subdomain, httpp.alvandsoft.com, and redirected all its subdirectories (REQUEST_URI) to the main index.php so requests get logged and I can see exactly what a proxy server would receive and send.
(The log is accessible through https://httpp.alvandsoft.com/?log=1&log_filename=log.txt)
But whenever I set it as a proxy for Telegram or other apps, it doesn't receive ANY requests at all, even when I use ports 443 or 80, and regardless of the proxy type (HTTP, SOCKS, or MTProto).
Does a proxy depend on server-level settings and work in some way other than regular HTTP requests and responses, or am I missing something?
I found it out myself. HTTP(S) proxies receive the full requested URL and pass the target along in the Host request header, and many hosts and websites check this header; if it doesn't match one of their valid addresses, they redirect the request immediately.
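For reference, this is roughly what a plain HTTP proxy actually receives on the wire: the request line carries the full absolute URL rather than just a path, and HTTPS traffic arrives as a CONNECT request instead (example.com here is just an illustration):

GET http://example.com/page HTTP/1.1
Host: example.com

CONNECT example.com:443 HTTP/1.1
Host: example.com:443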
I have access to a web service from my server IP, but I want to write a script, hosted on my server, that end users connect to as a relay for the WSDL service (requests and responses; it is an SMS service).
It's like a branch off the base service; however, I also need to count the requests passing through my script.
Unfortunately, I don't have access to the WSDL server's code.
The end users also have to use the original WSDL (SOAP, PHP) client, and I can't change their code either.
So I have to ask end users to connect to my server exactly as if it were the original WSDL server, then relay every request to the WSDL server, and do the same for the responses.
I could write a big class and relay everything peer to peer with many lines of code, but I think there must be a better way to do this.
Any better idea or solution?
To provide service orchestration (SOAP web services being service-interface based), it's better to implement the source WSDL on your side. Alternatively, you can set up a pass-through reverse proxy; the key point is rewriting the address URLs in SOAP requests and responses to your proxy server's address. Some solutions:
Use Nginx as a reverse proxy; here is a related article by Jeff Geerling.
With WSO2 ESB you can set up a pass-through SOAP proxy and inject your own logic code; as far as I know, it supports PHP (I have already done it with Java).
Write an HTTP reverse proxy program in PHP, and don't forget to rewrite the SOAP and WSDL request and response address URLs with the proxy server's address. Here is an example, and a minimal sketch follows this list.
You also have the option of orchestrating the SMS service and the counter service as a RESTful service and making it transparent to the client.
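A minimal sketch of the PHP reverse-proxy option, assuming the upstream service lives at original-wsdl-server.example.com (a placeholder) and using a file-based request counter purely for illustration:

<?php
// Count the request before relaying it (file-based counter for illustration).
file_put_contents('counter.txt', ((int) @file_get_contents('counter.txt')) + 1);

// Relay the raw SOAP POST body to the upstream server.
$body = file_get_contents('php://input');
$ch = curl_init('http://original-wsdl-server.example.com/service');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $body);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Content-Type: ' . (isset($_SERVER['CONTENT_TYPE']) ? $_SERVER['CONTENT_TYPE'] : 'text/xml'),
    'SOAPAction: ' . (isset($_SERVER['HTTP_SOAPACTION']) ? $_SERVER['HTTP_SOAPACTION'] : ''),
));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

// Rewrite upstream addresses so the client keeps talking to this relay.
header('Content-Type: text/xml');
echo str_replace('original-wsdl-server.example.com', $_SERVER['HTTP_HOST'], $response);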
Currently I'm developing a REST API.
API access is only between my server and my client's server (B2B, business to business), for example myserverapi.com (my REST API server) and myclientserver.com (my client's server that accesses my API); there is no third connection/application.
We are implementing an api_key (of course, that's a must) and a domain name (the client specifies the domain name he will access the API from, so my API server will only accept requests from that).
For myserverapi.com, how do I accept connections only from myclientserver.com? Is checking $_SERVER['REMOTE_ADDR'] enough? From what I've read, I can't rely on that because the IP may be wrong if the client is behind a proxy or a load-balancer farm. What's the solution?
And what if I install an SSL certificate? Must I change my code, or do I just buy and install it on the server side so that my API is automatically secure?
Is the openssl_verify function the only secure way to really know whether the access comes from a specific server or not? http://php.net/manual/en/function.openssl-verify.php Does that mean I must change my code to encrypt and decrypt data, as in this link: http://3stepsbeyond.co.uk/2010/12/openssl-and-php-tutorial-part-two/ ?
So basically I just want to make sure that myserverapi.com only accepts access from myclientserver.com, and that myclientserver.com only accepts data from myserverapi.com. How do I do that?
I hope someone can give me a good explanation.
Thank you.
There's a series of things you can do. The most code-agnostic is HTTPS client certificates. If you don't care exactly who the user is, but just want to make sure it is an allowed one, let Apache handle that. It's only a few lines in the config file, and you won't have to touch your code.
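In Apache those few lines look something like the following inside your SSL virtual host (a sketch; the CA file path is a placeholder, and you must have issued client certificates from that CA beforehand):

# Require a client certificate signed by your own CA
SSLVerifyClient require
SSLVerifyDepth 1
SSLCACertificateFile /etc/ssl/certs/allowed-clients-ca.pem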
Your second option is ACLs, which you can again handle within Apache, even at the path level. And of course at the OS level you can apply them to IPs and ports as well; the same applies to a firewall in, or in front of, your server or servers.
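For a path-level IP ACL in Apache (2.2-style syntax; the path and IP are placeholders):

<Location /api>
    # Only the client server's IP may reach the API
    Order deny,allow
    Deny from all
    Allow from 203.0.113.10
</Location>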
If you don't want to deal with managing certificates or IP/port firewalling and ACLs, you can implement two-legged OAuth. Clients can use a library, since they exist in virtually every language already, and the code for the server is not too complicated.
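To illustrate the signing idea behind two-legged OAuth, here is a much-simplified HMAC sketch (not the full OAuth 1.0a protocol; the header name and secret are made up for the example):

<?php
// Both sides share $apiSecret out of band.
$apiSecret = 'shared-secret';
$body      = '{"action":"ping"}';

// Client: sign the body and send the signature with the request,
// e.g. in a hypothetical X-Signature header.
$signature = hash_hmac('sha256', $body, $apiSecret);

// Server: recompute the signature and compare in constant time
// (hash_equals requires PHP >= 5.6).
if (!hash_equals(hash_hmac('sha256', $body, $apiSecret), $signature)) {
    header($_SERVER['SERVER_PROTOCOL'] . ' 403 Forbidden', true, 403);
    exit;
}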
For the SSL certificate, you don't have to alter your code. You may, however, want to check the server port and reject the request if it is not over SSL:
if ($_SERVER['SERVER_PORT'] != 443) {
    // header('405 Method Not Allowed') alone would not set the status line;
    // include the protocol and status code explicitly.
    // (Checking $_SERVER['HTTPS'] is another option.)
    header($_SERVER['SERVER_PROTOCOL'] . ' 405 Method Not Allowed', true, 405);
    exit();
}
Of course, you could accomplish a similar restriction with Apache, by simply not serving anything on port 80.
For the IP restrictions, $_SERVER['REMOTE_ADDR'] should work under most conditions. Unfortunately, I don't think there's a reliable way to see the real client IP through proxies. However, an API key should be sufficient for most security requirements.
A little late for an answer, but when checking REMOTE_ADDR for requests that might come via a proxy, you can check whether the X-Forwarded-For header is set, and use that if it exists.
http://en.wikipedia.org/wiki/X-Forwarded-For
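A small sketch of that check (remember the header is client-supplied and trivially spoofed, so only trust it when the request actually comes through your own proxy):

<?php
$clientIp = $_SERVER['REMOTE_ADDR'];
if (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
    // The header may hold a comma-separated chain; the original client is first.
    $parts = explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']);
    $clientIp = trim($parts[0]);
}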
I am using Google GeoCoding services.
I have a PHP application which calls the Google Maps API, and receives JSON data.
The function that calls the Google Maps host hangs until it times out, but only after I push to my GoDaddy virtual private server.
I have already SSH'd into the server and edited php.ini, changing safe_mode to off.
I get this error message:
Message:
file_get_contents(http://maps.googleapis.com/maps/api/geocode/json?address=xYxY&sensor=false):
failed to open stream: Connection timed out
This works fine on my WAMP server but fails on the live server. Any ideas why?
I have found the answer. What has it been, a week now? I hope others find this solution. The virtual dedicated servers from GoDaddy are IPv6-enabled, but the Google Maps API is having none of that. So tell cURL to force an IPv4 request, like this:
curl_setopt($ch, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
Credit is due in part to the blog where I found this information:
http://www.businesscorner.co.uk/disable-ipv6-in-curl-and-php/
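Put together, a minimal version of the geocoding call with IPv4 forced might look like this (the address value is just the placeholder from the error message above):

<?php
$url = 'http://maps.googleapis.com/maps/api/geocode/json?address=xYxY&sensor=false';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);         // return the body instead of printing it
curl_setopt($ch, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4); // force IPv4 to avoid the IPv6 timeout
curl_setopt($ch, CURLOPT_TIMEOUT, 10);                  // fail fast instead of hanging

$json = curl_exec($ch);
curl_close($ch);

$data = json_decode($json, true);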
Rather than just disabling IPv6, you can try to connect over one IP version and then swap to the other if the first attempt fails. This makes your implementation more robust against temporary routing issues on your end or the remote end.
You can get this behavior in file_get_contents() by binding the connection to an address family (inet6 or inet) and then trying the other family if the first attempt fails. I wrote up how to make file_get_contents() more routing-robust and dual-stack for anyone who is interested. It also shows you how to force connections to use IPv4 or IPv6 if you prefer to go down that route.
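A sketch of that bind-and-fall-back approach using stream contexts (binding to '0:0' forces IPv4 and '[0]:0' forces IPv6):

<?php
$url = 'http://maps.googleapis.com/maps/api/geocode/json?address=xYxY&sensor=false';

// Try IPv4 first by binding to the IPv4 wildcard address...
$ipv4 = stream_context_create(array('socket' => array('bindto' => '0:0')));
$result = @file_get_contents($url, false, $ipv4);

// ...and fall back to IPv6 if the IPv4 attempt failed.
if ($result === false) {
    $ipv6 = stream_context_create(array('socket' => array('bindto' => '[0]:0')));
    $result = @file_get_contents($url, false, $ipv6);
}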
Use cURL for getting external data. Many shared servers prevent the use of file_get_contents for external data (http://www.php.net/manual/en/filesystem.configuration.php#ini.allow-url-fopen) due to security risks.
Plenty of curl examples online.
Check your network. Google doesn't block such requests.
Check here.
I want a way to allow users to go through my HTTP proxy server (Squid, Privoxy, etc.) without having to type the IP/port into their web browser settings. I was hoping I could use a simple web interface.
I'm envisioning this:
User goes to a website on my server (http://proxy.com) and types a URL into the form.
The user's browser URL looks like (http://proxy.com/url=URL)
All connections to any future links are passed through my HTTP proxy running on a different port.
And I do NOT want to use existing PHP/CGI web proxy scripts.
My only reasoning for that is that I feel it would be much more efficient to re-route connections through a native proxy server than to have many PHP instances proxying the connections. Please tell me if you think this would not actually be the case.
Are there any simple ways of doing this? Thanks!
You may want to set up a transparent proxy. That way the clients do not know they are using a proxy, so they do not have to set the proxy IP in their browsers. This obviously does not work for HTTPS. Some information for Squid here: http://www.cyberciti.biz/tips/linux-setup-transparent-proxy-squid-howto.html
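The usual recipe is an iptables redirect plus an intercept-mode port in squid.conf (a sketch; the interface name and Squid port are assumptions, and older Squid versions spell the option transparent rather than intercept):

# Redirect outbound HTTP from the LAN interface to Squid
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

# squid.conf
http_port 3128 intercept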