I searched several sites and several Stack Overflow topics and tried several ways to solve it:
reinstalled Composer
reinstalled XAMPP
changed IPv6 to IPv4 -> temporarily resolved it
tried on another PC on the same network
saved my GitHub token in composer.json and got another error
used a VPN as suggested
cloned Laravel from GitHub, but when I need to update its Composer "autoload" I still get the same error for random packages, sometimes laravel/symfony and so on
tried changing php.ini according to "Fileinfo not working in Xampp v3.2.1": first of all, there was no ";extension=php_fileinfo.dll"; instead I found extension=fileinfo, which itself wasn't commented out
for the timeout I changed extention_max from 120 to 360 (my RAM is 16 GB)
I'm in an online PHP boot camp; no one else has the same error, and no one could help me when I asked for help.
I am ready to provide more info; maybe I tried a good approach in the wrong way.
I appreciate any help.
- Downloading laravel/sail (v1.16.2)
Failed to download symfony/http-foundation from dist: curl error 28 while downloading https://api.github.com/repos/symfony/http-foundation/zipball/90f5d9726942db69490fe467a3acb5e7154fd555: Operation timed out after 10008 milliseconds with 0 out of 0 bytes received
Now trying to download from source
- Syncing symfony/http-foundation (v6.1.5) into cache
Cloning failed using an ssh key for authentication, enter your GitHub credentials to access private repos
When working with _public_ GitHub repositories only, head to https://github.com/settings/tokens/new?scopes=&description=Composer+on+Mom+2022-10-06+2353 to retrieve a token.
This token will have read-only permission for public information only.
When you need to access _private_ GitHub repositories as well, go to https://github.com/settings/tokens/new?scopes=repo&description=Composer+on+Mom+2022-10-06+2353
Note that such tokens have broad read/write permissions on your behalf, even if not needed by Composer.
Tokens will be stored in plain text in "C:/Users/whowe/AppData/Roaming/Composer/auth.json" for future use by Composer.
For additional information, check https://getcomposer.org/doc/articles/authentication-for-private-packages.md#github-oauth
Token (hidden):
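For reference, the token does not have to be typed at this prompt; it can also be stored up front, which writes it into the auth.json mentioned above (a sketch; replace the placeholder with a real token):
composer config --global github-oauth.github.com YOUR_TOKEN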
New tries:
As suggested by Mr. NicoHaase, I reviewed the IPv6 section. The result was as below:
Downloading laravel/laravel (v9.3.8)
Failed to download laravel/laravel from dist: curl error 28 while downloading api.github.com/repos/laravel/laravel/zipball/…:
Operation timed out after 10005 milliseconds with 0 out of 0 bytes received
Now trying to download from source
(I can take a screenshot of every step I made.)
All limits lifted; my problem solved itself. But I think the missing answer was about the proxy. (31/10/2022)
Use a proxy!
If you are on Linux:
export http_proxy='your_proxy'
export https_proxy='your_proxy'
then run your composer command.
If using SOCKS5:
export http_proxy=socks5://ip:port https_proxy=socks5://ip:port
then run your composer command.
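On Windows (which the paths in the question suggest), the equivalent is to set the same environment variables before running Composer; a sketch, with ip and port as placeholders:
set http_proxy=http://ip:port
set https_proxy=http://ip:port
then run your composer command. In PowerShell, use $env:http_proxy = "http://ip:port" instead; Composer picks these variables up automatically.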
I guess you're Iranian, the same as I am.
Related
Suddenly, in my Laravel 8 project, I ran Composer to uninstall a dependency that I wanted to reinstall (for nothing more than order and starting from scratch), and it started throwing this error that has to do with Symfony Process:
PHP Fatal error: Uncaught TypeError: fclose(): Argument #1 ($stream) must be of type resource, bool given in phar://C:/ProgramData/ComposerSetup/bin/composer.phar/vendor/symfony/process/Pipes/WindowsPipes.php:71
What can the mistake be? I read that it has something to do with an update of Symfony Process, but I don't know why. The only thing I did was install Laravel Passport for the use of tokens in the user login.
Whilst running PHP 8 on Windows, I had this error too. I tried to clear out the temp directory manually (per "Composer loading from cache"), but that didn't work. I also found that Composer's Symfony usage had locked a temp file in an odd way.
I needed to clear the temp file, and I used filelocker to unlock it easily enough. Once I had unlocked and deleted the file, I was able to run Composer as expected again.
Here is a related Stack Overflow question about the temp file location: Composer install: error on temporary file (%USERPROFILE%\AppData\Local\Temp works for me).
They have names like 'sf_proc_00.err'. I found them easily by sorting the temp files by date and only trying to remove the ones modified today.
A reboot, or identifying the Symfony process tying up the temp file, would also work. According to "file locked", it was an instance of MinGW Git for me.
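If you just want to clear those leftover files in one go, something like this from a Command Prompt should work (a sketch; it only succeeds if no process is still holding the files open):
del /q "%TEMP%\sf_proc_*"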
I have set up Satis as a private Composer package repository.
Satis is running on "packages.asc.company"; I protected the site with Apache2 HTTP basic authentication and can open it in the browser by entering the basic auth credentials.
Now my question: how can I pass Composer the credentials to access the Satis site in the best and most secure manner when running e.g. "composer update"?
Currently I have registered only one user with a password in the Apache .htpasswd file and need to pass its credentials somewhere to be able to connect from Composer to Satis.
There are two cases where I need to connect from:
1) From the project during development
2) From jenkins during continuous integration process.
3) Edit: SSL
I am trying to use OpenSSL now to secure the credentials during login. On my Linux box, where Apache runs, I created a private key and a .crt file (see: Apache SSL). On my Linux box I can now open the Satis packages page over HTTPS, and I even redirect HTTP to HTTPS; all working. (I'm using my own certificate generated with OpenSSL because it's an internal application and I don't need a trusted CA.)
Now, when I switch from my Linux VM to my Windows machine (where I am coding) and try to run composer update, I get the following error message (the hosts file is configured correctly):
[Composer\Downloader\TransportException]
The "https://packages.asc.company/packages.json" file could not be downloaded: SSL operation failed with code 1. OpenSSL Error messages:
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
Failed to enable crypto
failed to open stream: operation failed
What did I miss? I'm pretty new to SSL, but I've now been reading about it the whole day and can't get it to work.
From the getcomposer.org Satis page I have this information, but I don't know how to use it:
{
    "repositories": [
        {
            "type": "composer",
            "url": "https://example.org",
            "options": {
                "ssl": {
                    "local_cert": "/home/composer/.ssl/composer.pem"
                }
            }
        }
    ]
}
Regards.
There is a documentation page for this.
Composer will work with user names added into the Satis URL. That works for me; I just wanted to get around the useless default password-protected server in the local network. There's a read-only account, and I used it.
Additionally: every developer in the company has an account on the repository server, and there isn't much use in protecting downloaded ZIP files with even more security. Composer itself currently doesn't support any code signing methods or hash comparison, so there is no way to know if a package has been tampered with, either in storage or during transmission.
According to the docs, not giving credentials in the URL will make Composer ask for them, or you can add them to auth.json. On the other hand: saving clear-text passwords in a dedicated file doesn't sound like the best idea, and transmitting them without HTTPS is even worse.
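For completeness, the auth.json entry for an HTTP-basic-protected Satis host looks roughly like this (a sketch; the hostname comes from the question, while the username and password are placeholders for the .htpasswd account):
{
    "http-basic": {
        "packages.asc.company": {
            "username": "satis-user",
            "password": "secret"
        }
    }
}
The same entry can also be created with composer config http-basic.packages.asc.company satis-user secret, either per project or with --global.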
You have to define what kind of security you want to have. What is the goal or the threat scenario you want to protect against?
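As for the SSL verification error in the question's edit: the options.ssl block shown there takes the standard PHP stream-context keys, so pointing Composer at the self-signed certificate is one possible route (a sketch; the cafile path is an assumption):
{
    "repositories": [
        {
            "type": "composer",
            "url": "https://packages.asc.company",
            "options": {
                "ssl": {
                    "cafile": "C:/certs/packages.asc.company.crt"
                }
            }
        }
    ]
}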
The easy part...
Usually, when migrating a ZF1 application from built-in autoloading to Composer-based autoloading (which is strongly recommended for deploying on CloudControl's Pinky stack), you just need to take some simple steps:
Create a composer.json file and require Zend Framework (e.g. the latest release of version 1.12) with:
{
    "require": {
        "zendframework/zendframework1": "1.12.*"
    }
}
Install composer dependencies via CLI with:
composer install
Update your .gitignore file and add:
vendor/*
Recursively delete current ZF folder from your library path (e.g. ./library):
rm -r library/Zend
Include composer autoloader in your index.php before any usage of Zend_ classes by adding:
$loader = include 'vendor/autoload.php';
Remove every now-obsolete ZF-related require or require_once statement from your index.php; e.g. this is not needed anymore:
require_once 'Zend/Application.php';
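Putting the last two steps together, the top of a typical ZF1 index.php might then look like this (a sketch only; the directory layout and environment handling are assumptions, not part of the original instructions):
<?php
// public/index.php - front controller after switching to Composer autoloading
define('APPLICATION_PATH', realpath(__DIR__ . '/../application'));

// The Composer autoloader replaces the old Zend require_once calls
$loader = include __DIR__ . '/../vendor/autoload.php';

$application = new Zend_Application(
    getenv('APPLICATION_ENV') ?: 'production',
    APPLICATION_PATH . '/configs/application.ini'
);
$application->bootstrap()->run();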
Once you are done with the above changes, you commit and push via Git as usual and then deploy the new version on CloudControl via the CLI (where APP_NAME and DEP_NAME refer to your app and deployment names):
cctrlapp APP_NAME/DEP_NAME deploy
You will notice that cctrlapp prints out some information about resolving composer dependencies and finally initiates the deployment of the new version. To check whether it is done you can run:
cctrlapp APP_NAME/DEP_NAME log deploy
Ok, deployment log looks fine – nice – let's open the browser!
What the f***! Internal server error? Why?? Everything worked well with the local LA(M)P stack!!!
The tricky part...
Fortunately CloudControl gives us access to the error logs as well ...
cctrlapp APP_NAME/DEP_NAME log error
Shouldn't be too hard to find out what's wrong here.
But ... err ...
8/1/14 5:23 AM error [error] [client ] FastCGI: incomplete headers (0 bytes) received from server "/app/php/box/php-fpm"
8/1/14 5:23 AM error [error] [client ] (104)Connection reset by peer: FastCGI: comm with server "/app/php/box/php-fpm" aborted: read failed
As the above error messages are not helpful at all we first need to track down this very bug. And this can indeed be tricky! We can google a bit. We can try something. We can then redeploy. We can google a bit more. We can try another thing. We can then redeploy again. We can google another time. We can try out every other thing. We can ... but do we want?
Luckily the Pinky stack offers another way to speed things up (which Luigi does not offer at all). While it still involves cumbersome manual debugging, at least we can save some time. Go to your CLI and execute:
cctrlapp APP_NAME/DEP_NAME run bash
CloudControl now instantiates a new container for us and gives us SSH-based shell access to it. As the documentation says, everything should be as it is inside our deployment boxes:
The distributed nature of the cloudControl platform means it's not
possible to SSH into the actual server. Instead, we offer the run
command, that allows you to launch a new container and connect to that
via SSH.
The container is identical to the web or worker containers but starts
an SSH daemon instead of one of the Procfile commands. It's based on
the same stack image and deployment image and also provides the
Add-on credentials.
Let's see if we can find out more (from inside the container):
cd code/public
php index.php
Hmm ... nothing reported here ... and ... nothing reported at the logs?! What the hell?!!
So, there seems to be a difference between the web and the run containers – and there is!
To find this out I started by editing the index.php right away:
vi index.php
And after a while I finally got to reproduce at least another error:
8/1/14 8:55 AM error [error] [client ] FastCGI: server "/app/php/box/php-fpm" stderr: PHP message: PHP Fatal error: require_once(): Failed opening required '' (include_path='/srv/www/code/vendor/zendframework/zendframework1/library:/srv/www/code/library:.:/usr/share/php') in /srv/www/code/vendor/zendframework/zendframework1/library/Zend/ ...
8/1/14 8:55 AM error [error] [client ] FastCGI: server "/app/php/box/php-fpm" stderr: PHP message: PHP Warning: require_once(/srv/www/code/vendor/zendframework/zendframework1/library): failed to open stream: No such file or directory in /srv/www/code/vendor/zendframework/zendframework1/library/Zend/ ...
Looks like some file is missing – maybe related to auto-loading – shouldn't be too hard to fix it.
But wait, what's that: Failed opening required ''? Sure you fail to require nothing you stupid code you!!
Well ... when looking at the respective ZF library files you won't find anything wrong there. The include paths seem to be correct as well – and yes, the files are present – composer managed both things correctly.
It's a PHP bug!
To be more precise, it is a bug in PECL APC which affects Pinky's current versions of PHP 5.4.30 / APC 3.1.13 - see:
https://bugs.php.net/bug.php?id=62398
And that's exactly the difference between the run and the web containers: the php.ini option apc.stat is set to 0 (off) for the web containers and 1 (on) for the run container.
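A quick way to confirm the setting from inside a container, assuming the CLI there reads the same php.ini (which is an assumption on my part):
php -i | grep apc.stat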
tl;dr
Clone the CloudControl Pinky PHP buildpack from GitHub:
git clone https://github.com/cloudControl/buildpack-php.git
Copy all files from this repository and add them to your project root folder at:
.buildpack/php
Edit .buildpack/php/conf/php.ini and set:
apc.stat = 1
Commit, push, deploy and enjoy!
Notes:
Keep in mind that this is just a workaround as APC stat does not need to be activated in such an environment (where the stack is recreated on deploy) and it slows down execution. See the PHP docs:
On a production server where the script files rarely change, a
significant performance boost can be achieved by disabled stats.
Thanks:
Finally I'd like to thank Dimitris and Mateusz from CloudControl for their general advice - though I needed to find out on my own what was going on here. Furthermore, I want to thank #BullfrogBlues and #Thierry_Marianne here at Stack Overflow, who tried to answer another questioner's thread dating back to November last year, which finally pointed me to look for APC-related issues.
I have downloaded the ZF2 skeleton application, and while running
php composer.phar install
the following error comes up:
[Composer\Downloader\TransportException]
The "http://packagist.org/p/zendframework/skeleton-application$65da2ae415c09e4b944964efe964f41b27e5b8bbe9cd7345515b4d2eea8ee5e6.json" file could not be downloaded: failed to open stream: HTTP request failed!
Please give me some advice.
This is an open issue; you can find it here:
GitHub Composer issue 2624
Maybe some of the solutions in the comments will help you.
Try :
Update composer.phar with
composer.phar self-update
First off, if you're running a web filter, especially K9 Web Protection, uninstall that first and retry. If the problem persists, read ahead:
The problem is that Composer downgrades to HTTP requests after the first HTTPS request to the server. This is done to improve performance/speed, while file integrity/security is still ensured via the SHA-256 hashes. In any case, this will cause a 10053 error (errno=10053 An established connection was aborted by the software in your host machine ... failed to open stream: HTTP request failed!) on some machines.
The reason this happens to some people and not others seems to be the manner in which your ISP handles HTTP requests. In my case, they're re-routed through a caching proxy, which doesn't work well with the way Composer crafts its HTTP requests. That's what happened with me - others may have a different cause. In any case, the fix is to force Composer to use HTTPS requests instead of HTTP requests:
Add the following to your Composer installation's config file (composer.json). On Windows, you may find this file at C:\Users\{Your Username}\AppData\Roaming\Composer.
"repositories": [
{
"packagist": false
},
{
"type": "composer",
"url": "https://packagist.org/"
}
],
Then go ahead and create your project again with the same command you had used: php composer.phar install. It should work now.
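Alternatively, the same override can be applied from the command line instead of editing the file by hand; as far as I know this writes the equivalent entry into Composer's global configuration:
composer config --global repo.packagist composer https://packagist.org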
I would like to use the Dropbox-PHP API that has recently come under development again. It is located here: http://code.google.com/p/dropbox-php/
I cloned it with hg clone https://dropbox-php.googlecode.com/hg/ dropbox-php and got this file structure:
Dropbox/API.php
Dropbox/autoload.php
Dropbox/Exception/Forbidden.php
Dropbox/Exception/NotFound.php
Dropbox/Exception/OverQuota.php
Dropbox/Exception/RequestToken.php
Dropbox/Exception.php
Dropbox/OAuth/PEAR.php
Dropbox/OAuth/PHP.php
Dropbox/OAuth/Zend.php
Dropbox/OAuth.php
examples/accountinfo.php
examples/createaccount.php
examples/download_image.php
examples/getmetadata.php
examples/oauth_workflow.php
examples/uploading.php
But I get this error when trying to run accountinfo.php (or any other example):
Warning: include(Dropbox/autoload.php) [function.include]: failed to open stream:
No such file or directory in dropbox-api/examples/accountinfo.php on line 7
Right, so then I moved the Dropbox folder inside the folder where all the example files are, and I still get an error message:
Fatal error: Uncaught exception 'Dropbox_Exception' with message 'The OAuth class
could not be found! Did you install and enable the oauth extension?' in
examples/Dropbox/OAuth/PHP.php:36 Stack trace: #0 examples/accountinfo.php(9):
Dropbox_OAuth_PHP->__construct('', '') #1 {main} thrown in
examples/Dropbox/OAuth/PHP.php on line 36
So I'm obviously not doing something right, but I have no idea what.
I also saw instructions for installing on the site:
pear channel-discover pear.dropbox-php.com
pear install dropbox-php/Dropbox-alpha
I ran those two commands and it still won't work. I don't usually have any problems coding in PHP but the lack of documentation is a little frustrating.
Update
As noted in the accepted answer below, my main problem was not having OAuth installed on the system. I'm running OS X 10.6 - if someone can provide some clear and easy instructions on how to build / install this to work with XAMPP / PHP 5.3, I will accept your answer. I've tried the articles online about using Homebrew and such, but they are flaky and do not seem to work for me. I'm guessing I will have to build / install it from scratch.
The Dropbox folder needs to be inside one of the folders in your include_path.
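If you would rather not move folders around, you can instead extend the include path at runtime before including the autoloader (a sketch; the checkout path is a placeholder):
<?php
// Make the cloned dropbox-php checkout visible to include/require
set_include_path(get_include_path() . PATH_SEPARATOR . '/path/to/dropbox-api');
include 'Dropbox/autoload.php';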
Edit:
Also, oauth needs to be "installed" on the system and enabled in php.ini (when you do phpinfo(), OAuth should show up as a module). Then things should work.
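A quick way to check whether the extension is actually loaded (run this with the PHP binary XAMPP uses, not necessarily the system one):
php -m | grep -i oauth
php -r "var_dump(extension_loaded('oauth'));"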