I have two cPanel websites running the same PHP script that parses a CSV file: one on GoDaddy servers (Site A), one on HostGator (Site B). On Site B, the HTML portion of the page renders and then the script parses the CSV, while on Site A it parses the CSV first and then renders the HTML portion.
Any ideas why the behavior differs? I would like Site A to run this script the way Site B does.
Both are running PHP 5.4 and have the same execution time limits.
Below are the results of array_diff_assoc() on the php.ini arrays of name => local_value. The first is array_diff_assoc(Site B, Site A); the second is array_diff_assoc(Site A, Site B).
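(For reference, each side's array was produced roughly like this — a sketch, with $iniA/$iniB as my own placeholder names:)
$ini = array();
foreach (ini_get_all() as $name => $info) {
    $ini[$name] = $info['local_value']; // keep only the local value
}
// With $iniA collected on Site A and $iniB on Site B:
print_r(array_diff_assoc($iniB, $iniA)); // first array below
print_r(array_diff_assoc($iniA, $iniB)); // second array below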
Array
(
[allow_url_include] => 1
[date.timezone] => America/Chicago
[disable_functions] => dl
[enable_dl] =>
[error_reporting] => 22519
[expose_php] =>
[extension_dir] => /opt/php54/lib/php/extensions/no-debug-non-zts-20100525
[include_path] => .:/opt/php54/lib/php
[intl.default_locale] =>
[intl.error_level] => 0
[max_execution_time] => 30
[memory_limit] => 256M
[mssql.allow_persistent] => 1
[mssql.batchsize] => 0
[mssql.charset] =>
[mssql.compatability_mode] =>
[mssql.connect_timeout] => 5
[mssql.datetimeconvert] => 1
[mssql.max_links] => -1
[mssql.max_persistent] => -1
[mssql.max_procs] => -1
[mssql.min_error_severity] => 10
[mssql.min_message_severity] => 10
[mssql.secure_connection] =>
[mssql.textlimit] => -1
[mssql.textsize] => -1
[mssql.timeout] => 60
[mysql.allow_persistent] =>
[odbc.allow_persistent] =>
[odbc.check_persistent] =>
[odbc.default_cursortype] => 3
[odbc.default_db] =>
[odbc.default_pw] =>
[odbc.default_user] =>
[odbc.defaultbinmode] => 1
[odbc.defaultlrl] => 4096
[odbc.max_links] => -1
[odbc.max_persistent] => -1
[pcre.backtrack_limit] => 1000000
[pcre.recursion_limit] => 100000
[post_max_size] => 64M
[sourceguardian.restrict_unencoded] => 0
[upload_max_filesize] => 64M
[xsl.security_prefs] => 44
)
Array
(
[allow_url_include] => 0
[apc.cache_by_default] => 1
[apc.canonicalize] => 1
[apc.coredump_unmap] => 0
[apc.enable_cli] => 0
[apc.enabled] => 1
[apc.file_md5] => 0
[apc.file_update_protection] => 2
[apc.filters] =>
[apc.gc_ttl] => 3600
[apc.include_once_override] => 0
[apc.lazy_classes] => 0
[apc.lazy_functions] => 0
[apc.max_file_size] => 1M
[apc.mmap_file_mask] =>
[apc.num_files_hint] => 1000
[apc.preload_path] =>
[apc.report_autofilter] => 0
[apc.rfc1867] => 0
[apc.rfc1867_freq] => 0
[apc.rfc1867_name] => APC_UPLOAD_PROGRESS
[apc.rfc1867_prefix] => upload_
[apc.rfc1867_ttl] => 3600
[apc.serializer] => default
[apc.shm_segments] => 1
[apc.shm_size] => 32M
[apc.shm_strings_buffer] => 4M
[apc.slam_defense] => 1
[apc.stat] => 1
[apc.stat_ctime] => 0
[apc.ttl] => 0
[apc.use_request_time] => 1
[apc.user_entries_hint] => 4096
[apc.user_ttl] => 0
[apc.write_lock] => 1
[date.timezone] => UTC
[disable_functions] =>
[enable_dl] => 1
[error_reporting] => 1
[expose_php] => 1
[extension_dir] => /usr/local/lib/php/extensions/no-debug-non-zts-20100525
[include_path] => .:/usr/lib/php:/usr/local/lib/php
[max_execution_time] => 120
[memory_limit] => 64M
[mysql.allow_persistent] => 1
[mysqlnd.collect_memory_statistics] => 0
[mysqlnd.collect_statistics] => 1
[mysqlnd.debug] =>
[mysqlnd.log_mask] => 0
[mysqlnd.mempool_default_size] => 16000
[mysqlnd.net_cmd_buffer_size] => 4096
[mysqlnd.net_read_buffer_size] => 32768
[mysqlnd.net_read_timeout] => 31536000
[pcre.backtrack_limit] => 10000000
[pcre.recursion_limit] => 10000000
[post_max_size] => 48M
[upload_max_filesize] => 32M
)
Update 1
In comparing ini_get_all(), I have found that Site A has APC enabled where Site B does not... could this be the issue? Is there any harm in disabling APC for this setup?
Update 2
I believe we can rule out implicit_flush, as both are set to false, and setting Site A's to true does not change the behavior.
Update 3
Included the differences in the php.ini files above.
Update 4
I have set Site A's php.ini file to be the same as Site B's, with no change in behavior, so maybe we can rule this out?
Update 5
Although it does not mimic the asynchronous nature of Site B, using
ob_flush();
flush();
to flush the buffer gets it very close, but it is not a very clean solution...
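For reference, the workaround looks roughly like this inside the parse loop (a sketch; render_row() is a hypothetical helper that turns a CSV row into HTML):
while (($row = fgetcsv($handle)) !== false) {
    echo render_row($row); // hypothetical HTML renderer
    ob_flush();            // flush PHP's own output buffer
    flush();               // push the output through the SAPI to the browser
}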
Related
I'm using Apache/2.4.29 with PHP 7.4.5 as a module.
Apache and PHP are configured to run as user lito.
Virtualhost configuration with PHP settings:
php_admin_value opcache.enabled 1
php_admin_value opcache.preload /home/lito/www/preload.php
php_admin_value opcache.preload_user lito
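(For context, a preload script is just plain PHP that runs once at server startup; a minimal hypothetical example, since the real /home/lito/www/preload.php isn't shown — the src/ path is made up:)
foreach (glob('/home/lito/www/src/*.php') as $file) {
    opcache_compile_file($file); // compile and cache without executing
}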
phpinfo() shows the opcache configuration:
opcache.enable On
opcache.preload /home/lito/www/preload.php
opcache.preload_user lito
No errors in the Apache log.
After an Apache restart, opcache_get_status() doesn't have any key related to preload status (preload_statistics), and only one script is cached (the current phpinfo.php):
Array
(
[opcache_enabled] => 1
[cache_full] =>
[restart_pending] =>
[restart_in_progress] =>
[memory_usage] => Array
(
[used_memory] => 9168872
[free_memory] => 125047784
[wasted_memory] => 1072
[current_wasted_percentage] => 0.00079870223999023
)
[interned_strings_usage] => Array
(
[buffer_size] => 6291008
[used_memory] => 522888
[free_memory] => 5768120
[number_of_strings] => 10969
)
[opcache_statistics] => Array
(
[num_cached_scripts] => 1
[num_cached_keys] => 1
[max_cached_keys] => 16229
[hits] => 1
[start_time] => 1587378881
[last_restart_time] => 0
[oom_restarts] => 0
[hash_restarts] => 0
[manual_restarts] => 0
[misses] => 3
[blacklist_misses] => 0
[blacklist_miss_ratio] => 0
[opcache_hit_rate] => 25
)
[scripts] => Array
(
[/home/lito/www/phpinfo.php] => Array
(
[full_path] => /home/lito/www/phpinfo.php
[hits] => 0
[memory_consumption] => 1040
[last_used] => Mon Apr 20 12:39:15 2020
[last_used_timestamp] => 1587379155
[timestamp] => 1587378955
)
)
)
Is opcache preloading not available on Apache with PHP as a module (without a custom php.ini)?
Thanks!
UPDATE: Tested adding the preload directives to /etc/php/7.4/apache2/php.ini and it works fine.
Similar (for nginx with php-fpm): https://bugs.php.net/bug.php?id=79043#1578412872
As far as I can tell, it looks like preloading is not in effect when configured this way.
Answer from PHP core dev:
yes, as preloading happens during early server startup, enabling it through php_admin_value does not work.
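So, per the update above, the working approach is to put the same directives in the global php.ini (/etc/php/7.4/apache2/php.ini) instead of the vhost:
opcache.preload=/home/lito/www/preload.php
opcache.preload_user=lito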
I'm trying to define an array containing all months within a certain date range (like 2015-03-01 to 2017-03-01).
The result I'm looking for is:
Array
(
['2015'] => Array
(
['03'] => 0
['04'] => 0
['05'] => 0
['06'] => 0
['07'] => 0
['08'] => 0
['09'] => 0
['10'] => 0
['11'] => 0
['12'] => 0
)
['2016'] => Array
(
['01'] => 0
['02'] => 0
['03'] => 0
['04'] => 0
['05'] => 0
['06'] => 0
['07'] => 0
['08'] => 0
['09'] => 0
['10'] => 0
['11'] => 0
['12'] => 0
)
['2017'] => Array
(
['01'] => 0
['02'] => 0
['03'] => 0
)
)
What would be the best way to do it?
Note: This is a dummy example, just looking for the best practice.
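For reference, a minimal sketch of one way to build this structure with DatePeriod (assumes PHP 5.3+; bounds are hard-coded for the example):
$start = new DateTime('2015-03-01');
$end = new DateTime('2017-03-01');
$end->modify('+1 month'); // DatePeriod excludes the end date, so step past 2017-03
$months = array();
foreach (new DatePeriod($start, new DateInterval('P1M'), $end) as $dt) {
    $months[$dt->format('Y')][$dt->format('m')] = 0;
}
print_r($months); // yields the structure shown above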
Some simple things to start off with:
use 32-bit ints instead of strings
keep the entire structure with the years, months and dates in contiguous memory (very good for caching)
if possible, make it matrix-free; you know a year has 12 months, so why reserve memory for that redundant information? (see the sketch below)
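For instance, a sketch of the matrix-free idea in PHP (the year*12+month encoding is my own illustration):
// Encode (year, month) as a single month index.
$first = 2015 * 12 + (3 - 1); // 2015-03
$last  = 2017 * 12 + (3 - 1); // 2017-03
$counts = array_fill(0, $last - $first + 1, 0); // one flat, contiguous block
// Decode an offset back to year/month when needed:
$i = 5;
$year  = (int)(($first + $i) / 12); // 2015
$month = ($first + $i) % 12 + 1;    // 8, i.e. 2015-08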
I'm on a Drupal 7 site, and I'm not used to Drupal. When I edit a node (standard page) and try to save it, the menu disappears. Not all nodes are like this, only the ones that use a field group of heatmaps, probably a custom field group (legacy).
System specs are:
CentOS 6.6
Apache 2.2
MySQL 5.5
PHP 7
At first, I thought it was a bug in Drupal 7, and I tried solutions such as "Menu items disappearing in Drupal 7", but the suggested ones didn't work. So I started to suspect post_max_size or memory_limit, because the form grows very large when it uses the custom field or the field group. So I've maxed out the memory settings; they look good, but it is still not working.
The field group array tends to be very large, and I tried to find some info about the nesting level of a POST being too deep, but found no hints.
The post size is:
post_max_size in bytes = 536870912
post CONTENT_LENGTH = 1020347
The field group contains heatmaps with geolocations and endless data:
[field_heatmap_data] => Array
(
[und] => Array
(
[0] => Array
(
[tablefield] => Array
(
[cell_0_0] => X
[cell_0_1] => Y
[cell_0_2] => Plastic
[cell_0_3] => Paper
[cell_0_4] => Glass
[cell_0_5] => Metal
[cell_0_6] => Organiskt
[cell_0_7] =>
[cell_0_8] =>
[cell_0_9] => Other
[cell_1_0] => 14.1741233638
[cell_1_1] => 57.7797089972
[cell_1_2] => 0
[cell_1_3] => 0
[cell_1_4] =>
[cell_1_5] =>
[cell_1_6] =>
[cell_1_7] => 1
[cell_1_8] =>
[cell_1_9] => 2
[cell_2_0] => 14.1784435935
[cell_2_1] => 57.7797106709
[cell_2_2] => 0
[cell_2_3] => 0
[cell_2_4] =>
[cell_2_5] =>
[cell_2_6] =>
[cell_2_7] =>
[cell_2_8] =>
[cell_2_9] =>
[cell_3_0] => 14.1656472109
[cell_3_1] => 57.7831198751
[cell_3_2] => 1
[cell_3_3] => 2
[cell_3_4] => 1
[cell_3_5] => 1
[cell_3_6] =>
[cell_3_7] =>
[cell_3_8] =>
[cell_3_9] =>
[cell_4_0] => 14.1753179083
[cell_4_1] => 57.7826699822
[cell_4_2] => 0
[cell_4_3] => 5
[cell_4_4] =>
[cell_4_5] => 3
[cell_4_6] =>
[cell_4_7] => 9
[cell_4_8] => 4
[cell_4_9] =>
[cell_5_0] => 14.1602465906
[cell_5_1] => 57.7824661754
[cell_5_2] => 2
[cell_5_3] => 0
[cell_5_4] => 1
[cell_5_5] =>
[cell_5_6] =>
[cell_5_7] => 4
[cell_5_8] =>
[cell_5_9] => 1
[cell_6_0] => 14.1552312791
[cell_6_1] => 57.7788985858
[cell_6_2] => 0
[cell_6_3] => 1
[cell_6_4] =>
[cell_6_5] => 1
[cell_6_6] =>
[cell_6_7] => 4
[cell_6_8] =>
[cell_6_9] =>
[cell_7_0] => 14.1631063952
[cell_7_1] => 57.7813178687
[cell_7_2] => 1
[cell_7_3] => 0
[cell_7_4] =>
[cell_7_5] =>
[cell_7_6] =>
[cell_7_7] => 2
[cell_7_8] => 3
[cell_7_9] =>
[cell_8_0] => 14.1742044644
[cell_8_1] => 57.7827544419
[cell_8_2] => 0
[cell_8_3] => 0
[cell_8_4] =>
[cell_8_5] =>
[cell_8_6] =>
[cell_8_7] => 4
[cell_8_8] => 1
[cell_8_9] =>
[cell_9_0] => 14.157952438
[cell_9_1] => 57.7818974962
[cell_9_2] => 2
[cell_9_3] => 4
[cell_9_4] => 5
[cell_9_5] => 1
[cell_9_6] =>
[cell_9_7] => 8
[cell_9_8] => 2
[cell_9_9] =>
[cell_10_0] => 14.1706946744
[cell_10_1] => 57.7815507326
[cell_10_2] => 0
[cell_10_3] => 0
[cell_10_4] =>
And so on....
So I've figured out that there is a flaw in the architecture of the node: it clearly can't handle that much data in a field group, and the data should have been handled as a separate node. But since this is a legacy project, I don't want to mess things up.
If I var_dump() the $_POST variable on different pages when editing, I can clearly see that $_POST stops after the $_POST['field_heatmap'] element on pages where that field contains data, while on pages without data in that field group the $_POST array continues past the $_POST['field_heatmap'] element.
So my question is: should I continue to try to find a bug in Drupal, or should I investigate some PHP (or maybe Apache) settings further? I've tried debugging with cachegrind but can't find anything unusual. Any hints are greatly appreciated!
Finally! max_input_vars was set to 1000.
Changed it to max_input_vars = 10000 and it worked!
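For reference, max_input_vars is PHP_INI_PERDIR, so besides php.ini it can also be raised per site, e.g. in .htaccess under mod_php (value taken from this fix):
php_value max_input_vars 10000
That also explains the truncated $_POST: PHP drops input variables beyond the limit, which is why the array stopped right after the big heatmap field.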
I'm in a bind with a deadline and I cannot seem to figure this out.
I am trying to query the table to get the values for the corresponding max shipdate. My query is below. This is for FoxPro using an ODBC driver.
SELECT
so1.sono,
so1.custno,
so1.item,
so1.shipdate as last_shipdate,
so1.price as last_price
FROM sotran01 so1
INNER JOIN (
SELECT
custno,
item,
MAX(shipdate) as last_shipdate
FROM sotran01
WHERE shipdate >= {d'2013-05-23'}
AND shipdate <= {d'2014-05-23'}
GROUP BY custno, item, last_shipdate
) so2 ON (so1.custno = so2.custno AND so1.item = so2.item AND so1.shipdate = so2.last_shipdate)
WHERE so1.item IN (
SELECT item
FROM arpric01
)
ORDER BY so1.custno, so1.item, so1.shipdate
This is what I get (using ADOdb):
ADODB_vfp Object
(
[databaseType] => vfp
[fmtDate] => {^Y-m-d}
[fmtTimeStamp] => {^Y-m-d, h:i:sA}
[replaceQuote] => '+chr(39)+'
[true] => .T.
[false] => .F.
[hasTop] => top
[_bindInputArray] =>
[sysTimeStamp] => datetime()
[sysDate] => date()
[ansiOuter] => 1
[hasTransactions] =>
[curmode] =>
[dataProvider] => odbc
[hasAffectedRows] => 1
[binmode] => 1
[useFetchArray] =>
[_genSeqSQL] => create table %s (id integer)
[_autocommit] => 1
[_haserrorfunctions] => 1
[_has_stupid_odbc_fetch_api_change] => 1
[_lastAffectedRows] => 0
[uCaseTables] => 1
[_dropSeqSQL] => drop table %s
[database] =>
[host] => DRIVER={Microsoft Visual FoxPro Driver};SOURCETYPE=dbf;SOURCEDB=C:\Sites\hub.fieldfresh.dev\_cache\VP10\PRAXIS\;EXCLUSIVE=NO;
[user] =>
[password] =>
[debug] =>
[maxblobsize] => 262144
[concat_operator] => +
[substr] => substr
[length] => length
[random] => rand()
[upperCase] => upper
[nameQuote] => "
[charSet] =>
[metaDatabasesSQL] =>
[metaTablesSQL] =>
[uniqueOrderBy] =>
[emptyDate] =>
[emptyTimeStamp] =>
[lastInsID] =>
[hasInsertID] =>
[hasLimit] =>
[readOnly] =>
[hasMoveFirst] =>
[hasGenID] =>
[genID] => 0
[raiseErrorFn] =>
[isoDates] =>
[cacheSecs] => 3600
[memCache] =>
[memCacheHost] =>
[memCachePort] => 11211
[memCacheCompress] =>
[sysUTimeStamp] =>
[arrayClass] => ADORecordSet_array
[noNullStrings] =>
[numCacheHits] => 0
[numCacheMisses] => 0
[pageExecuteCountRows] => 1
[uniqueSort] =>
[leftOuter] =>
[rightOuter] =>
[autoRollback] =>
[poorAffectedRows] =>
[fnExecute] =>
[fnCacheExecute] =>
[blobEncodeType] =>
[rsPrefix] => ADORecordSet_
[autoCommit] => 1
[transOff] => 0
[transCnt] => 0
[fetchMode] => 2
[null2null] => null
[bulkBind] =>
[_oldRaiseFn] =>
[_transOK] =>
[_connectionID] => Resource id #8
[_errorMsg] => [Microsoft][ODBC Visual FoxPro Driver]Syntax error.
[_errorCode] => 37000
[_queryID] =>
[_isPersistentConnection] =>
[_evalAll] =>
[_affected] =>
[_logsql] =>
[_transmode] =>
[_error] =>
)
The error doesn't say much. I can copy and paste the query into MySQL and it runs fine, returning what I expect. Hopefully another set of eyes, with more FoxPro experience, can see what the issue is here.
Thanks for any assistance.
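As an aside, instead of dumping the whole connection object, ADOdb can report just the last driver error (a sketch; $db and $sql stand in for your connection and query):
$rs = $db->Execute($sql);
if ($rs === false) {
    echo $db->ErrorNo() . ': ' . $db->ErrorMsg();
    // e.g. 37000: [Microsoft][ODBC Visual FoxPro Driver]Syntax error.
}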
Your subquery is incorrect. This is the subquery:
INNER JOIN (
SELECT custno, item, MAX(shipdate) as last_shipdate
FROM sotran01
WHERE shipdate >= {d'2013-05-23'}
AND shipdate <= {d'2014-05-23'}
GROUP BY custno, item, last_shipdate
-------------------------------^
)
That is an aggregation column. Remove it:
INNER JOIN (
SELECT custno, item, MAX(shipdate) as last_shipdate
FROM sotran01
WHERE shipdate >= {d'2013-05-23'}
AND shipdate <= {d'2014-05-23'}
GROUP BY custno, item
)
Does anyone know anything about troubleshooting a PHP cURL problem? I have been using RollingCurl with great success on my OS X laptop; however, when I upload to my Ubuntu server, the same code fails to yield a result.
So there is clearly something wrong server-side, yet the error logs are clean. I have no idea what to check... any help? Anyone!?
Thank you so much in advance, Stu
(Using RollingCurl from http://rolling-curl.googlecode.com/svn/trunk/)
Ubuntu 12.04 result
Array ( [url] => [content_type] => [http_code] => 0 [header_size] => 0 [request_size] => 0 [filetime] => 0 [ssl_verify_result] => 0 [redirect_count] => 0 [total_time] => 0 [namelookup_time] => 0 [connect_time] => 0 [pretransfer_time] => 0 [size_upload] => 0 [size_download] => 0 [speed_download] => 0 [speed_upload] => 0 [download_content_length] => -1 [upload_content_length] => -1 [starttransfer_time] => 0 [redirect_time] => 0 [certinfo] => Array ( ) [redirect_url] => )
Local OS X Leopard result
Array ( [url] => http://www.google.co.uk/ [content_type] => text/html; charset=ISO-8859-1 [http_code] => 200 [header_size] => 1535 [request_size] => 108 [filetime] => -1 [ssl_verify_result] => 0 [redirect_count] => 1 [total_time] => 0.597785 [namelookup_time] => 0.033881 [connect_time] => 0.070866 [pretransfer_time] => 0.070939 [size_upload] => 0 [size_download] => 43439 [speed_download] => 72666 [speed_upload] => 0 [download_content_length] => 221 [upload_content_length] => 0 [starttransfer_time] => 0.171418 [redirect_time] => 0.147887 )
// Only enable redirect-following when safe_mode is off
if (ini_get('safe_mode') == 'Off' || !ini_get('safe_mode')) {
    $options[CURLOPT_FOLLOWLOCATION] = 1; // follow HTTP redirects
    $options[CURLOPT_MAXREDIRS] = 5;      // cap at 5 redirects
}
Here is the problem... I commented this block out to test and BANG... it flies into action. Thank you all for your help.
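For context: older PHP rejects CURLOPT_FOLLOWLOCATION in curl_setopt() whenever open_basedir is set (and, before 5.4, under safe_mode), so the check above can pass on a server where only open_basedir is active, and the option then fails. A stricter guard (a sketch):
if (!ini_get('safe_mode') && !ini_get('open_basedir')) {
    $options[CURLOPT_FOLLOWLOCATION] = 1;
    $options[CURLOPT_MAXREDIRS] = 5;
}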
Check the php.ini configuration file on your Ubuntu server. This article covers many of the good practices people (and now some packages) use to protect their servers from attacks: http://blog.up-link.ro/php-security-tips-securing-php-by-hardening-php-configuration/
UPDATE.
To make it clearer:
Log in to your Ubuntu server as an admin user.
Change to the configuration file's directory:
cd /etc/php5
Open the file in an editor:
sudo nano php.ini
Search in the file for the allow_url_fopen = Off value.
Change the value to On, press CTRL+X, and answer "Y" to save the changes and quit the nano editor.
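After saving, the line should read:
allow_url_fopen = On
If PHP runs as an Apache module, restart Apache for the change to take effect (on Ubuntu 12.04):
sudo service apache2 restart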