I have tried everything.
I recently tried to deploy a new Laravel application on my Ubuntu DigitalOcean droplet. I have done this many times in the past and have done nothing out of the ordinary with this project. I'm using https://gist.github.com/jamieshepherd/50419bb148a4f43e8266 as a template nginx configuration, which I base all of my sites on. However, this time when I deployed and went to my domain, I was served another client's site. The site had no images, styles, scripts, etc., but sure enough this was the wrong site. Weirdly, I could go to /images/example.jpg and get the correct file from /public/images/example.jpg, but the actual routes were being served from a completely different folder.
There is absolutely no reference to the other client in my project anywhere; I feel like it has something to do with Laravel or PHP5-FPM setting some kind of root directory to the other folder.
I have gone to /public/index.php and replaced it with a simple echo/die message, and this works fine. When I revert the change and let the application run, it again displays the other client's application. Here's a list of other things I've tried:
- Temporarily renaming the other client's folder; as expected, NewSite throws a 404 error
- Temporarily disabling the other client's nginx block; NewSite continues to serve the other client's site (remember, not the css/images/scripts, weirdly)
- Reinstalling PHP5-FPM
- Printing the document_root; everything looks normal (/web/NewSite/public)
- Trying to run the site off a different port
- Pulling a working Laravel application from a completely different server, putting it on this server, and pointing the domain at it; it still serves the other client's site
- Grepping for any references to the client site in any of the project folders, just to check I wasn't going insane; confirmed I'm not insane, though after all this I'm not sure anymore
Really, I'm lost; I have no idea how to debug this anymore.
Site I'm trying to deploy: http://paragon.gg
Client site that it's serving: https://swellhunter.co.uk
Client site nginx conf: https://gist.github.com/jamieshepherd/4ff5430ddb13ed04f22c
Update: To make matters even more bizarre, I have temporarily removed swellhunter from sites-enabled and visited paragon.gg, and it still serves swellhunter's index. WHAT? There is absolutely no reference to the site at all now; I've even moved the paragongg folder to a completely different place to check. How on earth is it even finding the view with no reference to go on?
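For completeness, this is roughly the probe I've been dropping into /public to print those values (just a sketch):

```php
// probe.php -- shows which script and docroot PHP-FPM actually resolved
header('Content-Type: text/plain');
echo 'SCRIPT_FILENAME: ' . $_SERVER['SCRIPT_FILENAME'] . "\n";
echo 'DOCUMENT_ROOT:   ' . $_SERVER['DOCUMENT_ROOT'] . "\n";
echo '__FILE__:        ' . __FILE__ . "\n";
echo 'getcwd():        ' . getcwd() . "\n";
```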
Seems like an issue with Laravel rather than with nginx or your VPS, since you pointed out that a simple PHP file is served correctly.
SSH to your droplet and run your Laravel app using php artisan serve (e.g. php artisan serve --host=0.0.0.0 --port=3000, since it defaults to port 8000), then access your droplet on port 3000.
If the problem persists, you can rule out server issues and concentrate on debugging the Laravel app. Maybe there is something wrong with the Laravel app itself, some unwanted changes, etc. Did you verify your Laravel app?
Can you paste an ls -l of the directory? Have you checked what you have in the Laravel app/config/app.php for url? I just took a look at paragon.gg and tried https://paragon.gg; it gave me a cert error, but after I accepted it the site looked fine. Is that not the result you wanted?
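For reference, that setting lives in app/config/app.php in a Laravel 4-style project; the value below is just a placeholder:

```php
// app/config/app.php (Laravel 4) -- only the relevant key shown
return array(
    // The application URL; worth checking it matches the domain
    // you actually expect this app to serve
    'url' => 'http://paragon.gg', // placeholder -- set to your own domain
    // ...
);
```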
Related
I've made a Heroku app with the latest Drupal 8.
I then deployed it and configured the installation successfully.
Now I'm hitting a problem where Drupal automatically redirects to the installation page when I open my application after 1-2 hours.
I feel that it has something to do with dynos.
And yes, I have a free account.
I've already searched a lot on Google, but all the guides I found are outdated, very complicated, or don't make any sense.
The Drupal installation page should not keep coming back; the installation should be permanent.
If Drupal is showing you the install page, that means it didn't find the (old) database.
So either your database credentials are not correct for the new environment, so it can't connect to the database server (check sites/default/settings.php), or something is wrong with your database (wiped out by Heroku?). Check the database (with phpMyAdmin or some similar tool) to see whether you have tables and data inside them.
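For reference, the connection Drupal 8 reads is the $databases array near the bottom of sites/default/settings.php; it should look something like this (credentials below are placeholders):

```php
// sites/default/settings.php -- placeholder credentials for illustration
$databases['default']['default'] = array(
  'driver'   => 'mysql',
  'database' => 'drupal',
  'username' => 'dbuser',
  'password' => 'dbpass',
  'host'     => 'localhost',
  'port'     => '3306',
  'prefix'   => '',
);
```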
Heroku has an ephemeral filesystem, and Drupal creates/modifies some files during installation, such as settings.php.
An ephemeral filesystem means that all modifications to files will be lost when the dynos restart.
So, as soon as the dynos restart, the files revert to their original state.
Because of this, all the changes are wiped, and when you next open the site, Drupal can't detect the installation, which is why it redirects you to the installation page. It all happens because file changes simply do not persist on an ephemeral filesystem.
As @ceejayoz suggested, please see this article for a possible workaround for this problem:
https://www.fomfus.com/articles/how-to-create-a-drupal-8-project-for-heroku-part-1
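The gist of that workaround is to stop relying on files written at install time and instead derive settings from environment variables, which do survive dyno restarts. A minimal sketch, assuming your database add-on exposes a standard DATABASE_URL variable (adjust to whatever your add-on actually provides):

```php
// sites/default/settings.php -- sketch: build the connection from an env var
// so nothing needs to persist on Heroku's ephemeral filesystem
$url = parse_url(getenv('DATABASE_URL'));
$databases['default']['default'] = array(
  'driver'   => 'mysql',
  'database' => ltrim($url['path'], '/'),
  'username' => $url['user'],
  'password' => $url['pass'],
  'host'     => $url['host'],
  'port'     => isset($url['port']) ? $url['port'] : 3306,
  'prefix'   => '',
);
```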
So I'm working from a laptop, developing a PHP application. Since I'm allowed to work from home, I installed EasyPHP on the laptop so I can host the web application locally and work even when I'm not connected to the internet and without a VPN connection; even at work there's no need to mess with FileZilla or whatever. Everything is much easier for me that way.
My problem is the following: on our dev server at work, everything is good. I copied the whole codebase into my EasyPHP folder and made a duplicate of my database on my local MySQL server. Running the application on the work dev server, everything is fine. But locally, I get the following error:
Fatal error: Class 'ConfigNamespace' not found in C:\Program Files (x86)\EasyPHP-DevServer-14.1VC11\data\localweb\MyApplication\libs\config\ConfigParser.php on line 50
So my ConfigParser class calls new ConfigNamespace() at line 50... and the local server doesn't like it. The ConfigNamespace class sits right next to the ConfigParser class, in the \libs\config folder. Why does it not work locally, but works on the dev server?
I am under the impression that the configurations of the two servers are different, and that my local server expects all the classes to be in the root folder... but they're not!
I've searched the web but only found things about system classes being unreachable... nothing that really has anything to do with my problem.
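To illustrate, the relevant spot looks roughly like this (a sketch; the explicit require is a sanity check that bypasses any autoloader, assuming the file is named after the class):

```php
// libs/config/ConfigParser.php (sketch) -- the instantiation that fatals locally
require_once __DIR__ . '/ConfigNamespace.php'; // sanity check: with this explicit
                                               // require, no autoloader is involved
$namespace = new ConfigNamespace();            // line 50: fatals locally without it
```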
Thanks,
I fixed it. It ended up being a problem with rights on the folders after all. I did give it all the rights, but Windows somehow still blocked it... I changed my document root to another folder, outside Program Files, and now everything works!
I recently started using the Laravel framework, and I set up a little workspace to work in: a Xubuntu client at home that runs an Apache server, from which I program and host my Laravel instance. Recently I've run into a bit of a problem. I was following a tutorial series and everything went fine; the next day, however, I ran into this error: http://puu.sh/nh5vQ/8ae11c52ab.png
It happens both when opening the /var/www/html/test/public directory in Apache from my web browser and when running php artisan and actually hosting the Laravel instance.
The first thing that came to mind was permissions, but the permissions for the public folder are fine; it should be readable and executable by any program: http://puu.sh/nh5Ho/06e061a056.png
Most other posts reference a bit of code where someone specified the wrong file path. What I find odd is that when I open my hosted Apache instance in my web browser, where you can browse all the files in the hosted directory, the file is there in the list, but when I click it, the same error pops up.
I checked that all the files the server is trying to access exist, and they do; I made no typos.
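For reference, here's the kind of quick probe I used to confirm the files exist and are readable by the web server (a sketch; the paths match my setup):

```php
// permcheck.php -- drop into the public folder and open it in the browser
$paths = array(
    '/var/www/html/test/public',
    '/var/www/html/test/public/index.php',
    '/var/www/html/test/storage',
);
header('Content-Type: text/plain');
foreach ($paths as $p) {
    printf("%-40s exists=%s readable=%s\n", $p,
        file_exists($p) ? 'yes' : 'no',
        is_readable($p) ? 'yes' : 'no');
}
```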
I'm going to keep looking and will post the solution as soon as I find one, though I'm hoping you guys have an idea, since I've been looking for about 2 days.
Thanks for your time
I decided I'd have to start a new Laravel project to fix this without ending up with a chaotically configured project.
I work on a PHP website that deploys to our servers with Capistrano. The site has a very large media folder that needs to be moved into the current release on every deploy. Currently I have a shell script that does a mount --bind on every deploy, and before each deploy it does a umount of that folder. The problem is that this isn't reliable, and sometimes on cleanup it rms my media folder. I thought about putting the media folder in the Git repo, but it is 500 MB and needs to change as users create accounts.
So here are the options I've thought of; I'd like your opinions on them, or any better options you can think of.
- An .htaccess rewrite so that whatever looks for the media folder is rerouted to a subdomain that hosts the folder. I just don't know if this rewrite will work for creating files and directories or only for reading them. (I tested this today and it worked for reading from the subdomain, but I could not get create or write to work.)
- Find a better way to deploy the media folder without relying on shell commands that seem to fail and destroy the folder.
- Restart the server on every deploy (this unmounts all folders), then do the mount after the deploy and restart. This is time-consuming, and with many deploys it wouldn't work well because I have staging and production on the same server: if I'm testing on staging and restart it, my production site also restarts, unmounting production while I'm testing on staging.
Those are the only options I could think of. Any help would be great.
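Update: one direction I'm considering for the second option is replacing the bind mount with a symlink from a shared folder into the release, so a failed cleanup removes at worst a link, never the data (newer Capistrano versions do this natively via linked_dirs). A rough sketch of that step; the paths are assumptions for illustration:

```php
// link_media.php -- run after each deploy instead of mount --bind
// paths below are assumptions; adjust to the real layout
$shared  = '/var/www/shared/media';   // lives outside the releases, survives deploys
$current = '/var/www/current/media';  // where the app expects the media folder

if (is_link($current)) {
    unlink($current);                 // removes only the stale link, never the data
} elseif (file_exists($current)) {
    exit("refusing to touch a real file/directory at $current\n");
}
symlink($shared, $current) or exit("could not create the symlink\n");
```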
I have a bizarre problem with WinCache, and it's possible this isn't the best forum: I checked Super User, but there are 0 results regarding WinCache. Support seems weak throughout the web, but at least SO has discussed it before.
I have a live and a development site on the same server (CakePHP framework, Windows Server 2008 R2, IIS 7.5, PHP 5.3, FastCGI, WinCache 1.1). Live site is on port 80 and dev on 81. IIS has two web sites, each pointing to a different root folder in Inetpub\wwwroot. Each site is run under a different application pool.
Some time after a change to the dev site, the live site started erroring. After some painful debugging I found that the following (totally lame) chain of events occurs:
1. The live PHP script loads as normal following a GET request.
2. The live PHP script POSTs to itself as normal following a user selection.
3. The dev PHP script handles the POST request and errors because the form parameters are not as expected.
If I inspect the WinCache properties through the web interface on port 80 (the live site), I see the document root is Inetpub\wwwroot\Live, as expected. However, if I review the list of cached files under the file cache tab, I see some files cached twice; mousing over tells me that one is from Inetpub\wwwroot\Dev and the other from Inetpub\wwwroot\Live.
How is this possible? Why is IIS using one for a GET request and the other for a POST? I know that CakePHP chooses which files to load based on a formatted URL, but the index.php that handles routing is within Inetpub\wwwroot\Live, so it should presumably never request a file in a different doc root. It seems to me that PHP asks IIS for a file, and IIS is losing its marbles when communicating with WinCache.
So far I can't see any way to disable WinCache for just one site, but even if I could, I probably wouldn't trust it anymore.
Any suggestions on this vexing issue will be much appreciated.
I have no problems with a setup like this.
Try this:
- Use WinCache 1.2 (beta).
- Don't use "." in the app pool name.
- (Optional) Create a separate process user for each app pool.
- Look for session files called wincache_session_* in your session directory.
- Copy the wincache.php from "[AppDir 86]\IIS[Wincache]\wincache.php" into each web directory and open it over HTTP.