I'm trying to put some AIFF audio files behind a login wall on a PHP site (i.e. outside the web root). The first challenge is that AIFFs are not supported in all browsers, but that's expected -- see http://www.jplayer.org/HTML5.Audio.Support/. For now I'm using Safari to test because it supports AIFFs.
What I can't figure out is why Safari treats the two versions of the same file differently. For the direct file, it cues up the player and playback works. For the streamed file, the player doesn't work.
Regular Download
Here are what the headers look like when I download the file directly (i.e. if I temporarily put the file into web root for testing):
curl -v http://audio.app/Morse.aiff -o /dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 192.168.10.10...
* Connected to audio.app (192.168.10.10) port 80 (#0)
> GET /Morse.aiff HTTP/1.1
> Host: audio.app
> User-Agent: curl/7.49.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.8.0
< Date: Sun, 06 Nov 2016 03:19:03 GMT
< Content-Type: application/octet-stream
< Content-Length: 55530
< Last-Modified: Sat, 05 Nov 2016 21:51:02 GMT
< Connection: keep-alive
< ETag: "581e5446-d8ea"
< Accept-Ranges: bytes
<
{ [5537 bytes data]
100 55530 100 55530 0 0 8991k 0 --:--:-- --:--:-- --:--:-- 10.5M
* Connection #0 to host audio.app left intact
Through PHP
And here are the headers when I stream the file through my PHP script (named source.php):
curl -v http://audio.app/source.php?file=Morse.aiff -o /dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 192.168.10.10...
* Connected to audio.app (192.168.10.10) port 80 (#0)
> GET /source.php?file=Morse.aiff HTTP/1.1
> Host: audio.app
> User-Agent: curl/7.49.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.8.0
< Date: Sun, 06 Nov 2016 03:36:46 GMT
< Content-Type: application/octet-stream
< Content-Length: 55530
< Connection: keep-alive
< Last-Modified: Sat, 05 Nov 2016 21:51:02 GMT
< ETag: "581e5446T-d8eaO"
< Accept-Ranges: bytes
<
{ [8431 bytes data]
100 55530 100 55530 0 0 4915k 0 --:--:-- --:--:-- --:--:-- 5422k
* Connection #0 to host audio.app left intact
The headers are almost identical -- the only differences I can make out are the order of the headers and the exact ETag value my local dev box generates for the streamed copy.
Here is the test PHP script (named source.php) that I'm using to stream the same file (located above the web root):
<?php
// Adapted from http://php.net/manual/en/function.readfile.php
$filename = isset($_GET['file']) ? $_GET['file'] : null;
// <do sanitization here>
$file = dirname(dirname(__FILE__)) . '/audio/' . $filename;

// Mimicking AIFF headers from curl headers (does not work!)
$content_length = filesize($file);
$last_modified  = gmdate('D, d M Y H:i:s', filemtime($file)) . ' GMT';

header('HTTP/1.1 200 OK');
header('Content-Type: application/octet-stream');
header('Content-Length: ' . $content_length);
header('Last-Modified: ' . $last_modified);

// Attempts to do the same thing as nginx (hex mtime + hex size)... md5_file() would probably work too
$etag = sprintf('"%xT-%xO"', filemtime($file), $content_length);
header("ETag: $etag"); // quoting it exactly
header('Accept-Ranges: bytes');

// Output the file
readfile($file);
The expected behavior is that the browser would treat both versions the same. In my sample HTML page (adapted from http://www.w3schools.com/html/html5_audio.asp), only the direct download works -- the version of the file that's coming through PHP does not play. The same behavior happens when I hit both files in a browser directly.
<!DOCTYPE html>
<html>
<body>
<h2>From Stream</h2>
<audio controls>
<source src="/source.php?file=Morse.aiff&breakcache=<?php print uniqid(); ?>" type="audio/x-aiff">
Your browser does not support the audio element.
</audio>
<hr/>
<h2>Direct Downloads</h2>
<audio controls>
<source src="/Morse.aiff" type="audio/x-aiff">
Your browser does not support the audio element.
</audio>
</body>
</html>
The same approach has worked for playing mp3s (though the headers are slightly different). Does anyone know what I'm doing wrong here, or why this approach isn't working with AIFFs? I haven't yet tried the same test with another server-side language, but I suspect this isn't a PHP issue and has something to do with AIFFs. Can anyone shed light on this?
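As a sanity check (a minimal sketch, assuming both test URLs from above are reachable and allow_url_fopen is enabled), the two responses can be compared byte-for-byte to rule out the body itself being mangled by the PHP script:
<?php
// Hypothetical sanity check: compare the streamed response to the direct download.
$direct   = file_get_contents('http://audio.app/Morse.aiff');
$streamed = file_get_contents('http://audio.app/source.php?file=Morse.aiff');

echo 'direct:   ', strlen($direct),   ' bytes, md5 ', md5($direct),   PHP_EOL;
echo 'streamed: ', strlen($streamed), ' bytes, md5 ', md5($streamed), PHP_EOL;
echo $direct === $streamed ? "bodies are identical\n" : "bodies differ\n";
If the bodies match, whatever Safari dislikes is in how the streamed response is delivered rather than in the bytes themselves.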
Related
I don't know where to begin debugging this.
I have a local Apache server running a PHP backend that spits out a list of links from an API to the front end.
...
<li>
    <a href="..." target="_blank">Image</a>
</li>
<li>
    <a href="..." target="_blank">Image</a>
</li>
...
The links are a mix of HTTP and HTTPS. I'm having a problem with Safari in particular: it downloads the linked HTTPS images (the HTTP ones open fine in a new tab) instead of viewing them in a new tab.
Expected behaviour: all links with the target="_blank" attribute should open the image in a new tab in all browsers.
Actual behaviour: all links open the image in a new tab in all browsers except Safari (which downloads the jpg file instead).
cURL on HTTP links shows a 301 redirect (works fine in all browsers)
> GET /path/to/image1.jpg HTTP/1.1
> Host: hostpath
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Date: Mon, 20 Feb 2023 07:10:41 GMT
< Content-Type: text/html
< Content-Length: 178
< Connection: keep-alive
< Server: nginx
< Location: https://newpath.com/overHTTPS/image1.jpg
< Strict-Transport-Security: max-age=31536000; includeSubDomains; preload;
cURL on HTTPS links (these open in new tab fine in all browsers EXCEPT for Safari)
> GET /path/to/image2.jpg HTTP/2
> Host: hostpath
> User-Agent: curl/7.64.1
> Accept: */*
< HTTP/2 200
< content-type: image/jpg
< content-length: 150672
< last-modified: Thu, 24 Jun 2021 10:45:06 GMT
< x-amz-version-id: null
< accept-ranges: bytes
< server: AmazonS3
< strict-transport-security: max-age=31536000; includeSubdomains; preload
< date: Mon, 20 Feb 2023 07:16:15 GMT
< etag: "62a2466dbe39f0cd92908fa096ba9011"
< x-cache: RefreshHit from cloudfront
< via: 1.1 uid.cloudfront.net (CloudFront)
< x-amz-cf-pop: -cf-pop
< x-amz-cf-id: amz-cf-id==
cURL on a totally different HTTPS origin, as an experiment (this works! Safari opens this jpg to view in a new tab just fine):
> GET /path/to/differentHTTPS/image2.jpg HTTP/2
> Host: m.media-amazon.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/2 200
< content-type: image/jpeg
< content-length: 13470
< server: Server
< date: Mon, 20 Feb 2023 07:29:44 GMT
< x-amz-ir-id: 6e4a2087-7e28-47ca-bef1-f332c0575d92
< expires: Sun, 15 Feb 2043 04:07:45 GMT
< cache-control: max-age=630720000,public
< surrogate-key: x-cache-214 /images/I/51U-ZNaX5sL
< timing-allow-origin: https://www.amazon.in, https://www.amazon.com
< edge-cache-tag: x-cache-214,/images/I/51U-ZNaX5sL
< access-control-allow-origin: *
< last-modified: Sat, 24 Jul 2021 09:53:23 GMT
< x-nginx-cache-status: HIT
< accept-ranges: bytes
< via: 1.1 uid.cloudfront.net (CloudFront)
< server-timing: provider;desc="cf"
< x-cache: Miss from cloudfront
< x-amz-cf-pop: -cf-pop
< x-amz-cf-id: cf-id==
<
For the most part, my original HTTPS origin and the test HTTPS origin return near-identical response headers.
I thought it might be how Safari treats requests to HTTPS resources from an insecure HTTP origin (security?), so I deployed to my server, which hosts everything over HTTPS: still the exact same problem. Safari just will not open a .jpg from this external HTTPS origin in a new tab; it always downloads it.
I swapped in a totally different HTTPS link to an image, and it WORKS. It opens the image to view in a new tab and DOESN'T DOWNLOAD it. Just not from the other HTTPS source.
The request headers are the same across browsers and accept image/*.
Any ideas on how I can dig through this? Not sure what else I can try!
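One way to keep digging (a minimal sketch, using hypothetical placeholder URLs for the non-working and working images) is to diff the two origins' response headers programmatically and look for anything Safari might treat as a download hint, such as a Content-Disposition header or a non-standard Content-Type:
<?php
// Hypothetical URLs standing in for the non-working and working images.
$broken  = 'https://hostpath/path/to/image2.jpg';
$working = 'https://m.media-amazon.com/path/to/differentHTTPS/image2.jpg';

// get_headers() with the second argument set to 1 returns an associative array.
$a = array_change_key_case(get_headers($broken, 1), CASE_LOWER);
$b = array_change_key_case(get_headers($working, 1), CASE_LOWER);

// Print headers that differ or exist on only one side.
foreach (array_unique(array_merge(array_keys($a), array_keys($b))) as $name) {
    $left  = isset($a[$name]) ? $a[$name] : '(missing)';
    $right = isset($b[$name]) ? $b[$name] : '(missing)';
    if ($left !== $right) {
        printf("%-30s broken: %s | working: %s\n", $name,
            is_array($left) ? implode(', ', $left) : $left,
            is_array($right) ? implode(', ', $right) : $right);
    }
}
For what it's worth, the dumps above already show one difference: the failing origin returns content-type: image/jpg while the working one returns image/jpeg, so it may be worth checking whether Safari handles the non-standard image/jpg subtype differently.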
I've run into a strange issue, and I'm at a loss as to my next steps in debugging. I'm hoping the community can pitch in some ideas.
I'm using the following stack:
PHP 7.2.34-21+ubuntu16.04.1+deb.sury.org+1 (fpm-fcgi) (built: May 1 2021 11:52:36)
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
with Zend OPcache v7.2.34-21+ubuntu16.04.1+deb.sury.org+1, Copyright (c) 1999-2018, by Zend Technologies
Server version: Apache/2.4.39 (Ubuntu)
Wordpress 4.9.16
It's running on AWS. Obviously there's some overhead due to the framework, but I can dumb the code down to something along the lines of this:
<?php
// Framework stuff + file handling logic
$data = 'Imagine this is a 1000-byte binary message (pdf).';
// Headers, 335 bytes large
header('date: Sat, 03 Jul 2021 05:06:41 GMT');
header('server: Apache/2.4');
header('expires: Wed, 11 Jan 1984 05:00:00 GMT');
header('cache-control: no-cache, must-revalidate, max-age=0');
header('content-disposition: inline; filename=window-sticker.pdf');
header('strict-transport-security: max-age=1209600;');
header('content-length: 1000'); // Actually use strlen($data); for this
header('x-ua-compatible: IE=edge');
header('cache-control: public');
header('content-type: application/pdf');
echo $data;
exit();
Now here's the kicker. This works fine on a bunch of other sites that, as far as I can tell, use the same Apache sites-enabled configuration and similar .htaccess files. But it might still be a server/network/etc. type of error, so I could be missing something.
I have this one site, however, where this code breaks in the following way:
Tools that don't enforce Content-Length download/display this perfectly fine (Chrome, for instance).
Tools that do keep track of Content-Length fail or throw an error (Safari, curl). curl gives me:
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7ff13c008200)
> GET /redacted/path/to/controller?p=abcdef HTTP/2
> Host: www.redacted-somesite.com
> User-Agent: curl/7.64.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
0 0 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0< HTTP/2 200
< date: Sun, 04 Jul 2021 17:36:24 GMT
< server: Apache/2.4
< expires: Wed, 11 Jan 1984 05:00:00 GMT
< cache-control: no-cache, must-revalidate, max-age=0
< content-disposition: inline; filename=window-sticker.pdf
< strict-transport-security: max-age=1209600;
< content-length: 1000
< x-ua-compatible: IE=edge
< cache-control: public
< content-type: application/pdf
<
* HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1)
* stopped the pause stream!
0 1000 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0
* Connection #0 to host www.redacted-somesite.com left intact
curl: (92) HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1)
* Closing connection 0
Things I have checked:
The Content-Length IS correctly set; the body is the same size as what is set in that header.
The data IS being output, since tools like Chrome can get the full file.
Removing the Content-Length header makes this work in all tools.
And here we are: I'm not sure why things are failing. My current theory is that, for this site, some silent error might be writing to the output buffer before the headers are sent out. But when I check the binary data sent from the server in a hex tool, it's an exact match... so I'm at a loss. Maybe there's some compression layer screwing with me?
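One way to test the stray-output theory (a minimal sketch, not the actual controller code) is to inspect and discard anything already sitting in the output buffers right before the headers go out:
<?php
$data = 'Imagine this is a 1000-byte binary message (pdf).';

// Anything echoed (or any notice printed) before this point is sitting in the
// output buffer and will be prepended to the body, making the real payload
// larger than the advertised Content-Length.
while (ob_get_level() > 0) {
    $stray = ob_get_contents();
    if ($stray !== '' && $stray !== false) {
        error_log('Stray output before headers: ' . bin2hex($stray));
    }
    ob_end_clean(); // discard it so Content-Length matches the payload
}

header('content-type: application/pdf');
header('content-length: ' . strlen($data));
echo $data;
exit();
If nothing stray shows up, the compression theory is the next thing to check (e.g. whether mod_deflate or some proxy layer is enabled for this vhost but not for the sites that work).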
Any ideas would be amazing.
Thanks!
Update to match OP's latest edit:
The tools you mention seem to no longer be using the HTTP/1.1 protocol.
See this similar answer about cURL's "HTTP/2 stream 0 was not closed cleanly" error.
Old answer:
Content-Length should be exactly the same as the file size,
but if it is, maybe you are missing some other headers, like:
Content-Transfer-Encoding: binary
If all headers are there, check the PHP max execution time.
See also this example of a download with resume support from another post.
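Along the same lines, here is a minimal sketch (with a hypothetical file path) of a download script where Content-Length is derived from the actual file size and the extra header is included:
<?php
// Hypothetical path; in the real controller this comes from the framework.
$file = '/var/www/private/window-sticker.pdf';

if (!is_readable($file)) {
    http_response_code(404);
    exit();
}

header('Content-Type: application/pdf');
header('Content-Disposition: inline; filename="window-sticker.pdf"');
header('Content-Transfer-Encoding: binary');
header('Content-Length: ' . filesize($file)); // must match the bytes actually sent

readfile($file);
exit();
If the framework keeps output buffers open, flushing or discarding them before readfile() helps keep the body exactly as long as the advertised Content-Length.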
I am working on a site where the index page loads very slowly while all the inner pages load quickly.
This shows that the "Wait" time is the largest component, but if something were wrong with the website, how could the inner pages work fine? I am using Magento 1.9. I have already tried all the suggestions I could find on Google to speed up the website.
curl -o /dev/null -w "Connect: %{time_connect} TTFB: %{time_starttransfer} Total time: %{time_total} \n" http://toolsandgear.com.au
Connect: 0,179 TTFB: 9,287 Total time: 10,188
TTFB is more than 9 seconds -- far too long. This means you have a TTFB (Time To First Byte) issue. In most cases a TTFB issue comes down to wrong server settings or to overhead during page generation on the server side (too many heavy SQL queries per page, etc.). In more detail:
time curl http://toolsandgear.com.au/ -v --output /dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 108.167.181.160...
* Connected to toolsandgear.com.au (108.167.181.160) port 80 (#0)
> GET / HTTP/1.1
> Host: toolsandgear.com.au
> User-Agent: curl/7.42.0
> Accept: */*
>
0 0 0 0 0 0 0 0 --:--:-- 0:00:09 --:--:-- 0< HTTP/1.1 200 OK
< Server: nginx/1.6.2
< Date: Thu, 30 Apr 2015 16:19:47 GMT
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Expires: Thu, 19 Nov 1981 08:52:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
< Pragma: no-cache
< X-Frame-Options: SAMEORIGIN
< Set-Cookie: frontend=b024b8022ebb90a78afff9ccdb472764; expires=Thu, 30-Apr-2015 17:19:37 GMT; path=/; domain=toolsandgear.com.au; HttpOnly
< Set-Cookie: sns_furni_tpl=sns_furni; expires=Tue, 19-Apr-2016 16:19:37 GMT; path=/
<
{ [9577 bytes data]
100 217k 0 217k 0 0 20380 0 --:--:-- 0:00:10 --:--:-- 60239
* Connection #0 to host toolsandgear.com.au left intact
real 0m10.917s
user 0m0.017s
sys 0m0.007s
The common Nginx settings seem to be fine. Try optimizing the Nginx configuration (caching/gzip compression) and profile the Magento home page to catch long SQL queries, loops inside loops, etc.
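For the profiling step, Magento 1.x ships with a built-in profiler (Varien_Profiler). A rough sketch of how it can be used, assuming a standard Magento 1.9 index.php and the profiler output enabled under System > Configuration > Advanced > Developer > Debug:
<?php
// In index.php, before Mage::run(), enable the profiler
// (Magento's own autoloading provides Varien_Profiler).
Varien_Profiler::enable();

// Suspect blocks in home page templates or custom modules can be timed as well:
Varien_Profiler::start('homepage_suspect_block');
// ... code under suspicion ...
Varien_Profiler::stop('homepage_suspect_block');
The profiler output at the bottom of the page then shows where those 9+ seconds are being spent.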
I think the root cause of this is my general misunderstanding of how the Facebook API works, so I hope someone with a bit more knowledge can point me in the right direction.
All I'm trying to do is display a Facebook gallery for one of our clients on two different pages, hosted on different servers. I use this format on one page:
$albums = json_decode( file_get_contents('http://graph.facebook.com/'.$facebook_ID.'/albums') );
And this works fine, I get what I need. However, doing this on the other site gives me this error:
"message":"An access token is required to request this resource."
Does it really need an access token if all I am doing is requesting a public gallery? To further confuse me, if I simply put this in my browser:
http://graph.facebook.com/$facebook_ID/albums
I get all the required info. This leads me to think it's not a domain issue?
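To see the actual error body on the failing server (instead of just a PHP warning from file_get_contents), a minimal sketch using a stream context with ignore_errors and the $http_response_header variable:
<?php
$facebook_ID = '370438539411'; // the page ID from the curl output below

$context = stream_context_create(array(
    'http' => array('ignore_errors' => true), // fetch the body even on a 400 response
));

$body = file_get_contents(
    'http://graph.facebook.com/' . $facebook_ID . '/albums',
    false,
    $context
);

// file_get_contents() populates $http_response_header with the raw response headers.
print_r($http_response_header);
echo $body, PHP_EOL;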
Thanks!
--- EDIT ---
Here's some more info with curl.
First - the request that works, from my local box:
* About to connect() to graph.facebook.com port 80 (#0)
* Trying 69.171.242.27...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
^M 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to graph.facebook.com (69.171.242.27) port 80 (#0)
> GET /370438539411/albums HTTP/1.1
> User-Agent: curl/7.29.0
> Host: graph.facebook.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Access-Control-Allow-Origin: *
< Cache-Control: private, no-cache, no-store, must-revalidate
< Content-Type: application/json; charset=UTF-8
< ETag: "2829e31bfef4b737cdb31aab0f73c8ad35826012"
< Expires: Sat, 01 Jan 2000 00:00:00 GMT
< Pragma: no-cache
< X-FB-Rev: 801567
< X-FB-Debug: /uN6PrzpTWLLaJOn8vuww0ECYjineJN6P9w/DvvVczY=
< Date: Wed, 01 May 2013 12:01:26 GMT
< Connection: keep-alive
< Content-Length: 35483
<
ALL THE THINGS
And then here is the request from our live server - an EC2 instance ( if this is relevant )
* About to connect() to graph.facebook.com port 80 (#0)
* Trying 173.252.101.26... % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
^M 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0connected
* Connected to graph.facebook.com (173.252.101.26) port 80 (#0)
> GET /370438539411/albums HTTP/1.1
> User-Agent: curl/7.29.0
> Host: graph.facebook.com
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< Access-Control-Allow-Origin: *
< Cache-Control: no-store
< Content-Type: application/json; charset=UTF-8
< Expires: Sat, 01 Jan 2000 00:00:00 GMT
< Pragma: no-cache
< WWW-Authenticate: OAuth "Facebook Platform" "invalid_token" "An access token is required to request this resource."
< X-FB-Rev: 801567
< X-FB-Debug: wSxKF5MlCAmEFf2BuYRBDotWWreR6/t5m5mebc8vDXw=
< Date: Wed, 01 May 2013 12:03:38 GMT
< Connection: keep-alive
< Content-Length: 112
<
^M 0 112 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{ [data not shown]
^M100 112 100 112 0 0 181 0 --:--:-- --:--:-- --:--:-- 212
* Connection #0 to host graph.facebook.com left intact
* Closing connection #0
{"error":{"message":"An access token is required to request this resource.","type":"OAuthException","code":104}}
(END)
There may be one thing or another that makes or breaks access to public data the way you are accessing it. Since you have already experienced this, I'd suggest playing it safe: create an App and use an App Access Token to query for the public data instead of going at it directly, as Facebook might even change this behaviour in the near future.
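A minimal sketch of that approach, assuming you have created an app and have its ID and secret (the simple app_id|app_secret form of the app access token is used here; requesting a token from the /oauth/access_token endpoint works as well):
<?php
// Hypothetical credentials from your Facebook app settings.
$app_id      = 'YOUR_APP_ID';
$app_secret  = 'YOUR_APP_SECRET';
$facebook_ID = '370438539411';

// An app access token can be formed as "app_id|app_secret".
$app_token = $app_id . '|' . $app_secret;

$url = 'https://graph.facebook.com/' . $facebook_ID . '/albums'
     . '?access_token=' . urlencode($app_token);

$albums = json_decode(file_get_contents($url));
var_dump($albums);
Keep the app secret (and therefore this token) strictly server-side; never embed it in client-side code.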
I've been using NetBeans as my XDebug interactive debugging client, but it seems like it only supports attaching the debugger to scripts that are invoked via Firefox. I want to step through the request-parsing script when it's invoked via cURL.
I figured out the answer. First I attached a debugger by right-clicking on the WordPress project in NetBeans and choosing "Debug". This opens the blog in Firefox with the "XDEBUG_SESSION_START=netbeans-xdebug" parameter included in the URL (e.g. "http://localhost/wordpress/?XDEBUG_SESSION_START=netbeans-xdebug").
Then I invoked cURL from the command line, making sure to set a cookie with the name/value XDEBUG_SESSION/netbeans-xdebug:
>curl "http://localhost/wordpress/wp-app.php/posts" -X POST -H "Content-type: application/atom+xml" -v -L -k -u admin:password --data #post_atom_entry_bad.xml -o post_bad_response.txt -b XDEBUG_SESSION=netbeans-xdebug
* About to connect() to localhost port 80 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 80 (#0)
* Server auth using Basic with user 'admin'
> POST /wordpress/wp-app.php/posts HTTP/1.1
> Authorization: Basic YWRtaW46d2Fuc3Vp
> User-Agent: curl/7.19.1 (i586-pc-mingw32msvc) libcurl/7.19.1 OpenSSL/0.9.8i zlib/1.2.3
> Host: localhost
> Accept: */*
> Cookie: XDEBUG_SESSION=netbeans-xdebug
> Content-type: application/atom+xml
> Content-Length: 302
>
} [data not shown]
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 302 0 0 0 302 0 74 --:--:-- 0:00:04 --:--:-- 0
At this point cURL halts at the breakpoint I have set at line 283 of wp-app.php, in AtomParser->handle_request(), and I can step through the code.
Once I press F5 (Continue), the server sends the response back to cURL:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 302 0 0 0 302 0 2 --:--:-- 0:02:17 --:--:-- 0< HTTP/1.1 400 Bad Request
< Date: Mon, 15 Dec 2008 17:47:06 GMT
< Server: Apache/2.2.9 (Win32) DAV/2 mod_ssl/2.2.9 OpenSSL/0.9.8i mod_autoindex_color PHP/5.2.6
< X-Powered-By: PHP/5.2.6
< Content-Length: 0
< Connection: close
< Content-Type: text/plain
<
100 302 0 0 0 302 0 2 --:--:-- 0:02:18 --:--:-- 0* Closing connection #0
>
Done. It would be great to get examples from other interactive debugging clients, like Notepad++.
Related XDebug docs: http://www.xdebug.org/docs/remote#browser_session