I'm building a blog and I need to store images on my server. The images are given as full URLs, and when I post an article the image is saved with the copy() function.
Locally everything works fine, but on my server nothing happens, even though allow_url_fopen is set to On in php.ini.
The strange thing is that the server generates the file name it is supposed to save to, but never stores the file. For example, I want to save this picture : http://s.tfou.fr/mmdia/i/03/8/bob-l-eponge-10479038eajyx.png . I put this URL in my form, submit it, and the server saves this path: content/images/563b62825ab53.png as expected... but the path returns a 404 and the image is nowhere to be found.
Here is my PHP code (cover is the name of the form field that holds the image URL):
$extension = explode('.', $_POST['cover']);                           // split the URL on dots
$uniq = uniqid();                                                     // unique base name for the file
$path = 'content/images/'.$uniq.'.'.$extension[count($extension)-1];  // keep the original extension
copy($_POST['cover'], $path);                                         // fetch the remote image (needs allow_url_fopen)
$cover = $path;
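For debugging, a minimal sketch (assuming the same content/images/ target as above) that surfaces why copy() fails instead of ignoring its return value:
// Sketch: report why copy() failed rather than failing silently
$dir = 'content/images/';
if (!is_writable($dir)) {
    die("Directory $dir is not writable by the PHP user");
}
if (!copy($_POST['cover'], $path)) {
    $err = error_get_last();                       // last warning, e.g. "Permission denied"
    die('copy() failed: ' . $err['message']);
}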
Here is what I have on the server, with existing images pulled from my repo (generated locally, where everything works):
total 1328
-rw-r--r-- 1 root www-data 105331 Nov 5 14:59 563b578484ea1.jpg
-rw-r--r-- 1 root www-data 311132 Nov 5 14:59 563b57cf1db89.png
-rw-r--r-- 1 root www-data 132129 Nov 5 14:59 563b5a33d6c3b.jpg
-rw-r--r-- 1 root www-data 180274 Nov 5 14:59 563b5bbfe649b.jpeg
-rw-r--r-- 1 root www-data 283665 Nov 5 14:59 563b5c068e0bf.jpg
-rw-r--r-- 1 root www-data 311132 Nov 5 14:57 563b5fb480e73.png
But if I delete them all and try to have the server copy pictures itself, nothing appears... Here are the permissions of the folders that contain the pictures:
drwxr-sr-x 3 root www-data 4096 Nov 5 14:57 content
which contains:
drwxr-sr-x 2 root www-data 4096 Nov 5 14:59 images
which contains nothing...
I don't know whether it's a server issue, something missing in my php.ini, or something else.
You need to give write permission to the user/group that nginx is running as (it shouldn't be root). chmod g+w images should do what you want, assuming nginx is running as group www-data on your system.
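If you want to confirm the situation from PHP before changing anything, a small hedged check (it assumes the posix extension is available) shows which user the script runs as and whether the target directory is writable:
// Sketch: report the effective user/group of the PHP process and the directory state
$user  = posix_getpwuid(posix_geteuid());
$group = posix_getgrgid(posix_getegid());
echo 'PHP runs as ' . $user['name'] . ':' . $group['name'] . "\n";
var_dump(is_writable('content/images/'));   // must be true for copy() to succeed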
I have a Laravel (PHP) site which runs well both on localhost and on a HostGator Linux shared server. The website allows users to create accounts and upload images and documents into the following two directories:
var/www/html/public/contents/individual/project/images
var/www/html/public/contents/individual/project/docs
Now I have moved it to an Ubuntu server at DigitalOcean. Here a user can upload a document, but when they upload an image they get an "[object Object]" error. Is this related to permissions?
A command "ls -l" gives me this information on permissions:
itsme@MyWebsite:/var/www/html/public/contents/individual/project$ ls -l
total 20
drwxrwxrwx 3 www-data www-data 4096 Aug 31 04:29 cover
drwxrwxrwx 2 www-data www-data 4096 Nov 27 01:22 docs
drwxrwxrwx 3 www-data www-data 12288 Nov 27 01:23 images
The directories "docs" and "images" have same permissions and located at same level. If "docs" is taking contents why "images" does not?
Can someone help in resolving this issue.
Thanks
Thanks a lot everyone for the input. Sorry, my question was a bit unclear. Basically it was a problem with the ownership of the directories between /var and /images: www-data was supposed to own all of the directories, while in my case root was the owner. That is why the image plugin was showing the [object Object] error/message. I am still not sure why the "docs" folder was working and "images" was not, but now I can upload content to both folders.
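For anyone hitting the same thing, a quick hedged way to compare the two directories from PHP (the paths are the ones from the question, and the posix extension is assumed) is:
// Sketch: compare owner, group and permissions of the two upload directories
foreach (array('images', 'docs') as $dir) {
    $path  = "/var/www/html/public/contents/individual/project/$dir";
    $owner = posix_getpwuid(fileowner($path));
    $group = posix_getgrgid(filegroup($path));
    printf("%s owner=%s group=%s perms=%o writable=%s\n",
        $path, $owner['name'], $group['name'],
        fileperms($path) & 0777,
        is_writable($path) ? 'yes' : 'no');
}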
It is a Drupal site, where includes/module.inc runs a loop over files in the registry and attempts require_once(). For a number of files this is failing, even though the file permissions are correct and the file should be readable.
I've added debug code to the loop to print each file's permissions and contents:
// Debug code
print "$file perms:" . substr(sprintf('%o', fileperms($file)), -4) . "<br>";
print "$file contents:<br>" . htmlspecialchars(file_get_contents($file)) . "<hr>";
// Original Code
require_once $file;
It outputs the file permissions as well as the file contents before attempting the require_once. Different pages fail on different files; the homepage, for instance, outputs:
./sites/default/modules/cck/includes/content.token.inc perms:0755
./sites/default/modules/cck/includes/content.token.inc contents:
[filecontent]
./sites/default/modules/filefield/filefield.token.inc perms:0644
./sites/default/modules/filefield/filefield.token.inc contents:
[filecontent]
./sites/default/modules/getid3/getid3.install perms:0644
./sites/default/modules/getid3/getid3.install contents:
[NO FILE CONTENT]
So for some reason ./sites/default/modules/getid3/getid3.install allegedly has the permission to be readable, but isn’t.
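A hedged way to see the actual error behind the empty read (reusing the file from the output above) is to capture the warning that file_get_contents() raises; a "Permission denied" here despite 0644 permissions usually points at something like SELinux or AppArmor rather than classic file permissions:
// Sketch: capture the real error when reading fails despite readable permissions
$file = './sites/default/modules/getid3/getid3.install';
var_dump(is_readable($file));                  // classic (DAC) permission check
$contents = @file_get_contents($file);
if ($contents === false) {
    $err = error_get_last();
    print htmlspecialchars($err['message']);   // e.g. "failed to open stream: Permission denied"
}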
Different paths show different files as being problematic:
/
./sites/default/modules/getid3/getid3.install perms:0644
/admin
./sites/default/modules/webform/components/date.inc perms:0644
/user
./sites/default/modules/cck/includes/content.crud.inc perms:0755
EDIT:
Note above that ./sites/default/modules/cck/includes/content.token.inc is readable but ./sites/default/modules/cck/includes/content.crud.inc gives an error. Here's the directory listing for those files (including --context to check for SELinux):
# ll --context
total 168
drwxr-xr-x 4 root root ? 4096 Sep 28 05:50 ./
drwxr-xr-x 8 root root ? 4096 Nov 6 2013 ../
-rwxr-xr-x 1 root root ? 72264 Nov 6 2013 content.admin.inc*
-rwxr-xr-x 1 root root ? 26307 Sep 28 03:13 content.crud.inc*
-rwxr-xr-x 1 root root ? 7181 Nov 6 2013 content.devel.inc*
-rwxr-xr-x 1 root root ? 3868 Nov 6 2013 content.diff.inc*
-rwxr-xr-x 1 root root ? 15914 Nov 6 2013 content.node_form.inc*
-rwxr-xr-x 1 root root ? 12550 Nov 6 2013 content.rules.inc*
-rwxr-xr-x 1 root root ? 6246 Nov 6 2013 content.token.inc*
drwxr-xr-x 3 root root ? 4096 Nov 6 2013 panels/
drwxr-xr-x 3 root root ? 4096 Nov 6 2013 views/
The modified date on content.crud.inc is from me commenting out code for testing after the errors occurred, but it is back to how it was now.
EDIT 2:
It seems that trying to access robots.txt directly is also forbidden. Not sure if this is the same problem, but again the file looks like it should be perfectly readable.
# ll robots.txt
-rw-r--r-- 1 6226 6226 1521 Aug 6 18:07 robots.txt
EDIT 3:
Looks like the problem was AppArmor, which I suppose is similar to SELinux. Changing from aa-enforce to aa-complain resolved the issue.
Perhaps an SELinux command like this might get it running:
semanage fcontext -a -t httpd_sys_content_t '/var/lib/myapp(/.*)?'
restorecon -R -v /var/lib/myapp
Where /var/lib/myapp is your ./ directory
Looks like the problem was AppArmor, which I suppose is similar to SELinux. Changing from aa-enforce to aa-complain resolved the issue.
I'm trying to create a folder in PHP, and the code fails whenever it is used with /tmp/... as the path:
exec("mkdir -p /tmp/test/ 2>&1", $output, $return_code);
// $output is empty, $return_code is 0
//mkdir("/tmp/test/"); // Alternative to above
is_dir("/tmp/test/"); // returns true
is_readable("/tmp/test/"); // returns true
But if I check the /tmp folder, there is no such directory, and all subsequent write or read operations on it fail because the folder does not exist. The permissions on /tmp are correct (root:root with 777), and I can run sudo -u http mkdir -p /tmp/test without problems. If I use a relative path like tmp/test, the code runs fine and creates a folder inside the directory of the PHP script (which lies in a folder that belongs to me, not the http user...).
Any ideas as to why PHP fails to create a folder under /tmp/ but reports it as being there?
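One hedged way to narrow it down is to ask PHP itself what it sees under /tmp right after the call, rather than checking from a separate shell:
// Sketch: compare PHP's own view of /tmp with what the shell shows
var_dump(mkdir('/tmp/test', 0777, true));   // true on success, false plus a warning if it fails
print_r(scandir('/tmp'));                   // the /tmp that this PHP process sees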
Edit:
To clarify the read and write actions: those are not performed by my own script, but by external scripts which get called by the PHP script to execute different tasks. Once all of them have succeeded, the folder gets zipped and copied somewhere else.
Edit:
Right after running exec("mkdir -p /tmp/testfolder");
[daishy@littlezombie tmp]$ pwd
/tmp
[daishy@littlezombie tmp]$ ls -al
total 8
drwxrwxrwt 21 root root 440 3. Aug 18:56 .
drwxr-xr-x 20 root root 4096 10. Jun 16:49 ..
drwxrwxrwt 2 root root 40 3. Aug 09:42 .font-unix
drwxr-xr-x 2 daishy users 60 3. Aug 14:40 hsperfdata_daishy
drwxrwxrwt 2 root root 60 3. Aug 09:42 .ICE-unix
drwx------ 2 daishy users 60 3. Aug 12:35 kde-daishy
drwx------ 2 daishy users 140 3. Aug 18:49 ksocket-daishy
drwx------ 3 root root 60 3. Aug 18:54 systemd-private-5rIfGj
drwx------ 3 root root 60 3. Aug 09:42 systemd-private-HGNW9x
drwx------ 3 root root 60 3. Aug 09:42 systemd-private-od4pyY
drwx------ 3 root root 60 3. Aug 09:42 systemd-private-qAH8UK
drwxrwxrwt 2 root root 40 3. Aug 09:42 .Test-unix
drwx------ 4 daishy users 80 3. Aug 16:55 .Trash-1000
-r--r--r-- 1 root root 11 3. Aug 09:42 .X0-lock
drwxrwxrwt 2 root root 60 3. Aug 09:42 .X11-unix
drwxrwxrwt 2 root root 40 3. Aug 09:42 .XIM-unix
Edit:
As it turns out, this is not a problem with PHP, but rather with systemd / Apache. In short: systemd creates a private tmp folder for Apache while it is running, which resides under /tmp/systemd-private-XYZ. So the PHP script does not see the real /tmp, but rather the private one.
See http://blog.oddbit.com/post/private-tmp-directories-in-fedora for more info.
As it turns out, this is not a problem with PHP, but rather with systemd / Apache. In short: systemd creates a private tmp folder for Apache while it is running, which resides under /tmp/systemd-private-XYZ. So the PHP script does not see the real /tmp, but rather the private one.
To disable this behavior, you can set PrivateTmp=false in /usr/lib/systemd/system/httpd.service
See http://blog.oddbit.com/2012/11/05/fedora-private-tmp/ for more info.
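A hedged way to confirm the private /tmp from inside PHP (the marker file name here is just for illustration) is to create a file and then look for it from a shell:
// Sketch: the file exists for PHP, but with PrivateTmp=true it will only show up
// inside one of the /tmp/systemd-private-* directories when you look from a normal shell
$marker = '/tmp/php-tmp-check-' . getmypid();
touch($marker);
echo "PHP created $marker\n";
var_dump(file_exists($marker));   // true for the PHP process either way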
Don't do that. Use PHP's awesomely named tmpfile() function. From the docs:
$temp = tmpfile();
fwrite($temp, "writing to tempfile");
fseek($temp, 0);
echo fread($temp, 1024);
fclose($temp); // this removes the file
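If you actually need a whole working directory that survives until the external scripts finish (as in the question), a hedged alternative is to build it under sys_get_temp_dir(); with PrivateTmp enabled this still works, it just lives inside Apache's private /tmp:
// Sketch: per-job working directory under the system temp dir
$workDir = sys_get_temp_dir() . '/job-' . uniqid();
if (!mkdir($workDir, 0700) && !is_dir($workDir)) {
    die("Could not create $workDir");
}
// ... run the external scripts against $workDir, then zip it and copy it away ...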
Currently I am trying to set up a virtual machine for development for a client. Three SVN repositories with PHP code have to be combined into one folder (I know it's ugly, but that's how they roll). I googled a little and found mhddfs, so I checked out the three repositories into a folder called branches:
branches/branch1
branches/branch2
branches/branch3
I mounted the three branches with mhddfs at /mnt/dev. At the filesystem level everything works as expected, so ls correctly displays the contents of all three folders (they are disjoint). However, trying to serve the document root with Apache results in a 403 Forbidden error. I tried locations other than /mnt/dev as well, with no difference.
[Mon Feb 06 17:44:41 2012] [error] [client 192.168.56.1]
(13)Permission denied: access to / denied
When I do not mount the three folders but just put an index.php file into /mnt/dev, everything works as expected. Am I missing something?
Thanks for your help in advance.
EDIT: Some more data on the problem: when I create two directories that are world-accessible ...
root@devbox:/tmp > ls -lha
drwxrwxrwt 6 root root 4,0K 6. Feb 20:11 .
drwxr-xr-x 21 root root 4,0K 6. Feb 10:07 ..
drwxrwxrwx 2 www-data vboxsf 4,0K 6. Feb 20:11 test1 # includes index.htm
drwxrwxrwx 2 www-data vboxsf 4,0K 6. Feb 20:13 test2 # includes index2.htm
... and mount them via mhddfs ...
mhddfs /tmp/test1,/tmp/test2 /mnt/dev
mhddfs: directory '/tmp/test1' added to list
mhddfs: directory '/tmp/test2' added to list
mhddfs: mount to: /mnt/dev
mhddfs: move size limit 4294967296 bytes
... ls behaves correctly ...
root@devbox:/tmp > ls -lh /mnt/dev/
total 8.0K
-rwxrwxrwx 1 www-data vboxsf 12 6. Feb 20:11 index2.htm
-rwxrwxrwx 1 www-data vboxsf 11 6. Feb 20:11 index.htm
... while Apache (user: www-data, group: vboxsf) doesn't and terminates with the 403 error stated above. However, if I unmount the folders and just put an index.htm in /mnt/dev, everything works as expected as Apache can read the file.
Any ideas?
All the best,
Martin
I encountered the same problem on Linux.
Following the steps below, I was able to solve it.
[STEPS]
Enable 'user_allow_other' in /etc/fuse.conf
Use mhddfs with the '-o allow_other' option, e.g. mhddfs -o allow_other /dir1,/dir2 /path/to/mount
I am trying to read, and send back to the browser, a file uploaded with the Zend Framework upload mechanism.
The file has been uploaded correctly to the desired location, and I have checked by running
su www-data
and after an ls and a cat, the web user can read it and modify it properly.
The problem is that inside a controller, when I try:
if(!file_exists($fileName)) {
die("File ($fileName) wasnt set or it didnt exist");
}
I always end up in die(...), although $fileName is a string, and when I display its location I can always (as stated before) read the file from the command line.
ls output:
$ ls -lah
total 112K
drwxr-xr-x 2 www-data www-data 4.0K 2009-10-07 18:21 .
drwxr-xr-x 3 www-data www-data 4.0K 2009-10-07 13:57 ..
-rw-r--r-- 1 www-data www-data 70K 2009-10-07 17:33 Eclipse_Icon_by_TZR_observer.png
-rw-r--r-- 1 www-data www-data 27K 2009-10-07 18:24 eclipse_logo2.png
Stat output:
stat() [function.stat]: stat failed for .../eclipse_logo2.png
I saw a very similar question on the "try for 30 days" site, so it's not something that has happened only to me...
Any ideas?
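Since www-data can read the file from a shell, one hedged thing worth checking from inside the controller is whether a PHP-level restriction (open_basedir) or a stale stat cache is getting in the way:
// Sketch: rule out open_basedir and the stat cache before blaming file permissions
echo 'open_basedir = ' . var_export(ini_get('open_basedir'), true) . "\n";
clearstatcache();                                // drop any cached stat() result
var_dump(file_exists($fileName), is_readable($fileName));
$err = error_get_last();
if ($err) {
    echo $err['message'] . "\n";                 // e.g. an open_basedir restriction warning
}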
You have to chmod the newly created file, because the owner of a file created from the PHP side will be the Apache user (group: www-data, httpd, www, or something similar). So the next time the file cannot be accessed, it's because www-data owns it and it has the wrong permissions.
Here's how you create new files so that you can access them later.
<?php
$path = '/path/to/new/file';
touch($path);         // create the file (it will be owned by the web server user)
chmod($path, 0777);   // make it accessible to everyone
// TRY to change group, this usually fails
#chgrp($path, filegroup(__FILE__));
// TRY to change owner, this usually fails
#chown($path, fileowner(__FILE__));
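If you only need the web application itself (running as www-data) to read the file back later, more restrictive permissions are usually enough; a hedged variant of the snippet above:
// Sketch: a less permissive variant of the approach above
touch($path);
chmod($path, 0664);   // owner and group read/write, others read-only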