Up until recently, any error messages Laravel produced were written to /app/storage/logs on Azure, as they are supposed to be. They still are locally, and my live server uses the exact same Laravel configuration. However, about two months ago Laravel stopped writing to the log files on my live server.
Log::info() still works, but nothing gets written unless I explicitly tell it to log something.
Since it works locally and the exact same configuration is live, I don't know where to start looking, and googling has proved fruitless. I'm sorry I haven't included any images or code, but I am completely clueless as to what could cause the error. Maybe it has something to do with write permissions? Any ideas?
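To test the write-permission theory, a quick probe run inside the app (e.g. via a temporary route or an artisan tinker session) can exercise the same directory the logger uses. This is a minimal sketch; storage_path() is the standard Laravel helper, and the probe filename is arbitrary:

```php
// Sanity-check the log directory Laravel writes to.
$logDir = storage_path('logs');

var_dump(is_dir($logDir));       // does the directory exist?
var_dump(is_writable($logDir));  // can the web server's user create files here?

// Try an actual write, which exercises the same path the logger uses.
$probe = $logDir . '/write-probe.txt';
if (@file_put_contents($probe, "probe\n") === false) {
    echo 'Write failed: ' . (error_get_last()['message'] ?? 'unknown') . "\n";
} else {
    echo "Write succeeded\n";
    unlink($probe);
}
```

If the probe write fails, the problem is permissions or disk space rather than Laravel's logging configuration.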
Progress 1: When something is supposed to be written to the logs folder, its "Most recently modified at" date changes to the current date and time. However, nothing inside the folder changes.
Progress 2: An error was just printed to the log, but not all errors are. There is an Imagick error that should be written to the log but currently isn't; still, the fact that another error message was printed changes the entire question. I just found this out and am going to test some things now.
Progress 3: I have confirmed that the only thing not printed to the log is when Imagick fails to load a PDF. There are other reports of this particular Imagick problem on Windows servers, where reading PDFs causes Imagick to crash without any error message. This means the question is no longer relevant; thank you all for your time.
You may check whether the disk space of your App Service is full; if so, that may be causing your issue.
Every App Service pricing tier has a disk-space limit, which is shared across all the web apps in that App Service plan.
You can check the metric on the dashboard page of your web app in the Azure portal, e.g.
You can refer to https://azure.microsoft.com/en-us/pricing/details/app-service/ for details.
As a workaround, you can create a WebJob that runs a script continuously to move your old log files to Azure Blob Storage or a database, saving storage space in the App Service.
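A minimal sketch of such a cleanup script is below. The paths and age threshold are assumptions to adapt to your layout, and the actual upload to Blob Storage (e.g. via the Azure SDK) is left out; this only frees the App Service disk:

```php
// Archive log files older than $maxAgeDays out of the app's log directory.
$logDir     = '/home/site/wwwroot/app/storage/logs'; // adjust to your layout
$archiveDir = '/home/LogFiles/archive';              // hypothetical target
$maxAgeDays = 7;

@mkdir($archiveDir, 0775, true);
foreach (glob($logDir . '/*.log') as $file) {
    if (filemtime($file) < time() - $maxAgeDays * 86400) {
        rename($file, $archiveDir . '/' . basename($file));
    }
}
```

Scheduled as a continuous or timed WebJob, this keeps the shared disk quota from filling up with old logs.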
I'm having this issue on my production server but not in local development:
Loading any page of the CMS (excluding secondary tabs) fires off a "success" notification, sometimes 5-6+ of them at the same time. I cannot seem to track down where they're even coming from, let alone what's causing them. I'm at such a loss that I even have a tough time explaining it, so I'll attach a screenshot.
Server is Cloudways PHP stack: 1GB RAM, Apache/Nginx
That's an issue that was already fixed; see this issue on GitHub.
It only happens on SSL/HTTP2 servers, so that explains why you don't get this issue on your local dev environment.
You should be able to solve your problem by updating the CMS.
Before moving on, I want to mention that I have tried to look for answers on the web, but in vain.
I am tasked with investigating why our CakePHP-based website is no longer working on our staging server. At times the site loads completely, but when logging a user in it takes forever to authenticate, to the point that it produces an Internal Server Error.
Step 1: I have checked the cache directory for both persistent and models and they are clean.
Step 2: The Configure::write value for debug is already set to 2, yet nothing gets written to the error.log file.
Could this have anything to do with session data? I am trying to figure out what's going on, and I have tried looking into Cake's lib folder to see if I can edit the files in there to see what the website actually outputs instead of the Internal Server Error message. Which file should I edit? I followed this link, but it seems the core is a different version.
It turned out that there was an issue with an SAP server connection, which kept the pages in a loop until they timed out, resulting in an Internal Server Error. This is supported by the fact that the live website was working fine.
I think the best solution would be to tweak the SAP server connection component so that its requests don't hang until they time out, although I lack knowledge of that technology. I have disabled the call, and everything now works lightning fast.
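One way to make such a component fail fast instead of hanging the whole request is to put a short timeout on the remote call. This is a sketch only, assuming the SAP connection is HTTP-based; the endpoint URL is a placeholder:

```php
// Give the remote call a short timeout so a dead backend fails fast
// instead of hanging until the web server's own timeout produces a 500.
$context = stream_context_create([
    'http' => ['timeout' => 5], // seconds; tune to what the backend needs
]);

$response = @file_get_contents('https://sap.example.internal/endpoint', false, $context);
if ($response === false) {
    // Log and degrade gracefully instead of blocking the login flow.
    error_log('SAP endpoint unreachable, skipping enrichment');
}
```

With a bounded timeout, a broken backend costs a few seconds and a log entry rather than an Internal Server Error.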
Error
I have a web app with a mass uploader (Plupload) for photos, and when I upload, say, twenty photos, about six (around 30 %) will fail with an Internal Server Error. I have checked the Apache error.log for this domain and it has nothing new (I know I'm looking at the right error.log, since older errors did show up there).
This only happens on my VPS on Dreamhost (my hosting provider) servers while on my development server it runs silky smooth.
Oh, and things used to work just fine a month ago and then just started to fail. Back then I was using Uploadify and since that used Flash, it was impossible for me to debug where the upload failed.
Files and script
Uploaded files are photos, all about 100 kB each, even though I've successfully uploaded (and still can upload) 3 MB photos. My .htaccess naturally doesn't change during uploads. On the server side, a PHP script uses the GD2 library to move and resize the photo.
Server state
I have recently upgraded my VPS from 300 to 400 MB of RAM. This thing used to work, and I upgraded it just to rule memory out as a cause. Also, my PHP memory limit is 200 MB, so this should suffice.
I am getting mighty frustrated that Dreamhost does not want to help, stating that "we can not be responsible for an error your code causes" and "We still will not be able to assist you in debugging the issue unfortunately."
It has been a week of sparse "support" while my app doesn't work and my clients are frustrated.
Questions
Is this kind of "you're on your own" support standard across the industry, i.e. would your host handle this differently?
How exactly can I debug this?
I'm going to assume that you have a standard Apache + PHP setup. One possible configuration is the pre-forked setup; in this case Apache will adapt to system load by forking more children of itself.
With only 400 MB of RAM you're pretty tight, so if you're running 20 processes that each take 200 MB (assuming every process handles fairly big files using GD), you're getting into hot water with the memory manager.
I would reduce the total number of instances to 2 first to see how this will go; also keep an eye on the memory usage by running top.
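For the prefork MPM, capping the child count is done in the Apache config. A conservative sketch along these lines (the values are illustrative, not tuned; directive names are for Apache 2.2, where MaxClients later became MaxRequestWorkers in 2.4):

```apache
<IfModule mpm_prefork_module>
    StartServers         2
    MinSpareServers      1
    MaxSpareServers      2
    MaxClients           2
    MaxRequestsPerChild  500
</IfModule>
```

With at most 2 children at 200 MB each, the worst case stays within the 400 MB you have, at the cost of serializing concurrent uploads.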
Regardless, it might be beneficial to run a separate task manager such as Gearman to perform the resize tasks, so that the upload handler only has to move the uploaded file and enqueue the resize job; this way you can greatly reduce the memory required by your PHP instances.
As to your Q1: the simple answer is that you get what you pay for. A 300 MB RAM Dreamhost VPS costs ~$360 per annum. For this you get the VPS service and responses on service failures relating to the provision of the virtual environment. The OS, the software stack and the applications are outside this service scope. Why? This sort of custom knowledge-base support could cost $50-300 per hour. You are being unreasonable and deluding yourself if you expect Dreamhost to provide such services pro bono. That's what sites like this one do.
So my suggestion is that you suck up that anger and frustration and work out how to help yourself.
As to your Q2: (i) you need to understand where your Apache errors go; (ii) ditto any SQL errors, if you are using a D/B; (iii) you need to ensure that PHP error logging is enabled and verify where the PHP logs are going; (iv) you need to inspect those logs and verify that logging is working correctly, by using a small script which generates runtime errors.
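Such a probe script can be as small as this sketch; each line should produce a log entry if PHP error logging is wired up correctly (the log path in the comment is an example, not a requirement):

```php
// Deliberately noisy probe: run it once, then check the PHP error log.
ini_set('log_errors', '1');
// Optionally point the log somewhere you can read:
// ini_set('error_log', '/tmp/php-probe.log');

error_log('probe: explicit error_log() call');       // always logged
trigger_error('probe: user notice', E_USER_NOTICE);
trigger_error('probe: user warning', E_USER_WARNING);
```

If none of these lines show up anywhere, logging itself is misconfigured, and no amount of staring at application code will surface the real error.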
You should also consider using facilities such as php_xdebug to raise logging detail, and introducing application-level logging.
In my experience, systems and functions rarely die silently. However, application programmers often ignore return statuses, etc. For example, in the GD library, imagecopyresized() can fail, and it returns a status code to tell the application when it has; but if the application doesn't test this status and act accordingly, it can silently end up going down bizarre execution paths, and just appear to the user (or developer) as "it just stopped working".
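Concretely, a resize path that treats every GD call as fallible looks roughly like this sketch ($uploadedPath is a stand-in for wherever your upload handler puts the file):

```php
// Treat each GD call as fallible instead of assuming success.
$src = imagecreatefromjpeg($uploadedPath);   // returns false on failure
if ($src === false) {
    error_log("GD could not decode $uploadedPath");
    exit;
}

$dst = imagecreatetruecolor(200, 200);
if (!imagecopyresized($dst, $src, 0, 0, 0, 0, 200, 200,
                      imagesx($src), imagesy($src))) {
    error_log("imagecopyresized failed for $uploadedPath");
}
```

Every failure now leaves a trace in the log, so a batch upload that loses 30 % of its photos stops being a mystery.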
My last comment is that you should really consider setting up a private VPS within your development environment that mirrors your Dreamhost production config, and use it for integration, acceptance testing and support. This is pretty easy to do, and you can mess with it, add debug / what-if options, and then roll back without polluting your production environment. Tools like VMware appliances and VirtualBox make this easy. See this blog post for a description of how I did this for my hosted service.
Trying to respond to question 2: if you have checked all your code and didn't find any bug, I think the best thing you can do is check the versions of all the programs running on the server (Apache, PHP, ...). For example, I remember having a problem with a web service running on Apache and PHP; the PHP version was 5.2.8, and after a lot of investigation I found out that that version had a problem parsing XML data.
Regarding the first part of the question: Dreamhost does offer a paid support service with "call back". We used this once to get the low-down on something. They are very good with general support (better than many hosts, IMO), but you can't expect dedicated service, and they must handle a lot of piddling questions. But pay for a call back and, in about 2 minutes on the phone, you can get the answer you want, plus they get their $10 (recurring) for the time. You both win. Just remember to cancel the recurring charges.
Regarding the second part of the question: we had this very same issue with them. Their response (as suggested by Linus in the comments) was that they keep a tally of the CPU use of all processes run by your "user". If that total exceeds a threshold, they simply kill the process(es) to get the cycles down. No error messages, no warnings, no nothing. Processes can include MySQL, CGI (Perl) or PHP. There is no way to monitor or predict it, and we couldn't program around it. Solution... not DreamHost, unfortunately. (webhostingtalk.com will give you loads of host ideas.) So we use them for some sites, but not for others.
I have a hosted server (rochenhost.com), where I run some PHP code.
In the old days, before I started working as a software developer and was self-taught, I printed the variables out.
Now, after some years of school and a developer job, and after I have learned to use debuggers, I wonder: are there any good debugging tools for PHP code running on a hosted server?
Is the "hosted code" you're working on directly on your production server? Or do you have two separate codebases, one for development (debugging and such) and another for production (displaying to your actual users)? As you probably know, changing code directly on your production server is kind of insane and is almost guaranteed to occasionally bring your site down or create security holes. So my biggest piece of advice would be to get a local development server. This can be as easy as downloading the appropriate XAMPP stack for your computer and using your favorite VCS to sync files with the production server once you've debugged them.
Once you have a local development server, check out this question for a list of debuggers with step-through functionality and also this one for a larger list of IDEs available on different platforms.
If you are stuck debugging code on a remote server, here are a couple of other practices that can help. You may already be doing them.
1) Turn on error output. You can do this for a particular script by inserting the following lines at the beginning:
ini_set("display_errors", "1");
error_reporting(E_ALL);
This will print (sometimes) informative error messages to the page. It is considered a major security risk to expose this information to visitors, so make sure you remove these lines when you're done testing. If you have a local development server, or one that's not accessible to the outside world, you can turn on error reporting for all pages by adding the line display_errors = 1 to php.ini.
2) Locate your server's PHP error log. This often contains information about why a page died, even when you're not able to load enough of the page for PHP to display error messages there. You can also use the command error_log('your message here') to print a message to the log, which is useful when you can't just dump the info on your page.
I use the FirePHP extension for Firefox and ChromePhp for Chrome. They put log messages in the browsers' console logs. They have saved me hours of debugging time.
Hey folks, this question can't be too complicated. Please suggest a way to at least pin down the ultimate root cause of the problem.
I currently write an application, which controls Excel through COM: The app creates a COM-based Excel instance, opens some XLS files and reads their contents.
Scenario I
On Windows 7, I start Apache and MySQL using xampp-control with system administrator rights. All works as expected. The PHP-based controller script interacts with Excel as expected.
Scenario II
A problem appears if I start Apache and MySQL as "background jobs". Here is how:
I created two jobs using the Windows 7 Task Scheduler. One runs apache_start.bat, the other runs mysql_start.bat.
Both tasks run as SYSTEM with elevated privileges when Windows 7 boots.
Apache and MySQL work as expected. Specifically, Apache serves HTTP requests from clients, and PHP is able to talk to MySQL.
When I call the PHP controller, which calls and interacts with Excel using COM, I do receive an error.
The error message comes from Excel [not COM itself] and reads like this:
Excel can't read the specified Excel-file
Excel failed to save the file due to an ill-named worksheet
Interestingly, during the first run of the PHP-based controller script it takes a few seconds to render the error message; each subsequent run renders the error message immediately.
Windows system logs didn't show a single problem report entry.
Note, that the PHP program and the Apache instance didn't change - except the way Apache was started.
At least the PHP controller script is perfectly able to read the file system, since it obtains the paths to the XLS files through scandir() on a certain directory.
Concurrency issues can't be the cause of the problem. A single instance of the specific PHP controller interacts with Excel.
Question
Could someone provide details, why this happens? Or provide ways to isolate the ultimate cause of the problem (e.g. by means of a PowerShell 2 script)?
UPDATE-1 :: 2011-11-29
As proposed, I switched the Task Scheduler job from SYSTEM to a conventional user. It works: Apache and MySQL start and process requests.
Unfortunately, the situation regarding Excel didn't change a bit. I still see the error.
As assumed earlier, the EXCEL COM server starts. I'm able to change various settings (e.g. suppress dialogs) without a problem through the COM-instance.
The problem happens while calling this:
$excelComObject->Workbooks->Open( 'PathToXLSFile' );
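To surface the underlying COM error rather than Excel's generic message, the call can be wrapped in a try/catch; com_exception is the exception class PHP's COM extension throws. A sketch:

```php
// Wrap Open() so the underlying COM error is logged, not just
// Excel's generic "can't read the file" dialog text.
try {
    $workbook = $excelComObject->Workbooks->Open('PathToXLSFile');
} catch (com_exception $e) {
    // The HRESULT code and description often narrow the cause down.
    error_log('COM error ' . $e->getCode() . ': ' . $e->getMessage());
    throw $e;
}
```

The logged HRESULT is usually a better search term than the dialog text when hunting for the cause.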
UPDATE-2 :: 2011-11-30
Added the accounts USER, GUEST and EVERYONE with read permission to the access control list of the XLS file. No change.
Modified the app so that the PHP part creates a copy of the XLS file as a temporary file and moves the contents of the original into it, just to ensure the problem isn't caused by odd file or path names.
Still, the problem persists.
UPDATE-3 :: 2011-12-05
I'm going to drive the Excel COM server's methods in such a way that Excel creates a blank file and saves it to /tmp. Let's see whether Excel is even able to read that file back.
Go into the Task Scheduler and let everything run as a local user. This will probably require that you enter a password, so create one if you don't have one already.
Excel is a user-level application that shouldn't run as SYSTEM. I'm sure there are ways around it, but you should simply let everything run at the correct level.
Having Apache run on the user level isn't a problem.
Try creating the following directories:
C:\Windows\SysWOW64\Config\Systemprofile\Desktop
C:\Windows\System32\Config\Systemprofile\Desktop
it worked for me :-)
http://social.msdn.microsoft.com/Forums/en-US/innovateonoffice/thread/b81a3c4e-62db-488b-af06-44421818ef91
In the past (read: pre Vista) services had an option called "Allow Service to interact with desktop" which allowed services to spawn windows etc. Starting Vista, this is no longer allowed.
I suspect Excel is failing because it can't function under this restriction. Thus, any attempt to run it as a service in your Win7 installation will fail.
You could go back to Windows XP and allow desktop interaction for your Apache process, which I don't really recommend for obvious reasons.
Another approach I would take is to create a PHP script that runs as a regular process and listens on a socket in an infinite loop. Your PHP script that runs under Apache would communicate with the secondary script through the local socket and have the secondary script spawn Excel.
This may sound complicated, but in fact it's not a lot of code, and it fixes a problem you will soon have anyway: you should only have one instance of Excel running, or you may run into problems. The secondary script can queue requests, handing them off to Excel one by one.
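The secondary script could be sketched like this; the port number is arbitrary, and handleWithExcel() is a placeholder for the actual COM interaction:

```php
// Long-running worker: accepts requests on a local socket and talks to
// Excel one request at a time, so only one Excel instance ever runs.
$server = stream_socket_server('tcp://127.0.0.1:9100', $errno, $errstr);
if ($server === false) {
    die("Could not bind: $errstr ($errno)\n");
}

while ($conn = stream_socket_accept($server, -1)) {
    $request = trim((string) fgets($conn));   // e.g. a path to an XLS file
    $result  = handleWithExcel($request);     // placeholder: drive Excel via COM
    fwrite($conn, $result . "\n");
    fclose($conn);                            // next queued client is accepted
}
```

The Apache-side script then just opens a client socket to 127.0.0.1:9100, writes the file path, and reads back the result, while the worker serializes access to Excel.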