Can php-fpm pools have NICE values?
Each of my pools runs under its own user and group, like:
[pool1]
user = pool1
group = pool1
...
I've tried creating /etc/security/limits.d/prio.conf with the following contents:
#pool1 hard priority 39
But in htop that pool still has the same PRI and NI values as the other pools after a reboot.
Just use the process.priority pool directive: https://www.php.net/manual/install.fpm.configuration.php#worker-process-priority
[pool]
process.priority = 10
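process.priority accepts nice values from -19 (highest priority) to 20 (lowest), and it is only applied when the FPM master is launched as root. A minimal sketch with two pools (the pool names and users are just placeholders):

[pool1]
user = pool1
group = pool1
process.priority = -5

[pool2]
user = pool2
group = pool2
process.priority = 10

Reload php-fpm afterwards and the NI column in htop should differ per pool.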
I'm having problems with timeouts using the DataStax PHP driver for Cassandra.
Whenever I execute a certain command it always throws this exception after 10s:
PHP Fatal error: Uncaught exception 'Cassandra\Exception\TimeoutException' with message 'Request timed out'
My PHP code looks like this:
$cluster = Cassandra::cluster()->build();
$session = $cluster->connect("my_base");
$statement = new Cassandra\SimpleStatement("SELECT COUNT(*) as c FROM my_table WHERE my_colunm = 1 AND my_colunm2 >= '2015-01-01' ALLOW FILTERING");
$result = $session->execute($statement);
$row = $result->first();
My settings in cassandra.yaml are:
# How long the coordinator should wait for read operations to complete
read_request_timeout_in_ms: 500000
# How long the coordinator should wait for seq or index scans to complete
range_request_timeout_in_ms: 1000000
# How long the coordinator should wait for writes to complete
write_request_timeout_in_ms: 2000
# How long the coordinator should wait for counter writes to complete
counter_write_request_timeout_in_ms: 50000
# How long a coordinator should continue to retry a CAS operation
# that contends with other proposals for the same row
cas_contention_timeout_in_ms: 50000
# How long the coordinator should wait for truncates to complete
# (This can be much longer, because unless auto_snapshot is disabled
# we need to flush first so we can snapshot before removing the data.)
truncate_request_timeout_in_ms: 60000
# The default timeout for other, miscellaneous operations
request_timeout_in_ms: 1000000
I've already tried this:
$result = $session->execute($statement, new Cassandra\ExecutionOptions([
    'timeout' => 120
]));
and this:
$cluster = Cassandra::cluster()->withDefaultTimeout(120)->build();
and this:
set_time_limit(0);
And it always throws the TimeoutException after 10s.
I'm using Cassandra 3.6.
Any ideas?
Using withConnectTimeout (instead of, or together with, withDefaultTimeout) might help avoid a TimeoutException (it did in my case):
$cluster = Cassandra::cluster()->withConnectTimeout(60)->build();
However, if you need such a long timeout, then there is probably an underlying problem that will need solving eventually.
You are doing two things wrong.
ALLOW FILTERING: Be careful. Executing this query with ALLOW FILTERING is probably not a good idea, as it can use a lot of your cluster's computing resources. Don't use ALLOW FILTERING in production. Read the DataStax doc about using ALLOW FILTERING:
https://docs.datastax.com/en/cql/3.3/cql/cql_reference/select_r.html?hl=allow,filter
count(): It is also a terrible idea to use count(). count() actually pages through all the data, so a select count() from userdetails without a limit would be expected to time out with that many rows. Some details here: http://planetcassandra.org/blog/counting-key-in-cassandra/
How to fix it?
Instead of using ALLOW FILTERING, create an index table on your clustering column if you need to query without the partition key.
Instead of using count(*), create a counter table, as sketched below.
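A minimal sketch of the counter-table idea, using the same driver API as the question; the table my_table_counts and its row_count column are hypothetical names:

// CQL, run once (hypothetical schema):
// CREATE TABLE my_table_counts (my_colunm int PRIMARY KEY, row_count counter);

// Increment the counter every time a matching row is written:
$increment = new Cassandra\SimpleStatement(
    "UPDATE my_table_counts SET row_count = row_count + 1 WHERE my_colunm = 1"
);
$session->execute($increment);

// Reading the count back is a cheap single-partition lookup instead of a full scan:
$read = new Cassandra\SimpleStatement(
    "SELECT row_count FROM my_table_counts WHERE my_colunm = 1"
);
$row = $session->execute($read)->first();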
I have a PHP script that processes large XML files and saves data from them into a database. I use several classes to process and store the data in the PHP script before saving to the database, and I read in the XML node by node to preserve memory. Basically, a loop in my file looks like this:
while ($Reader->read()) {
    $parsed++;
    if (time() >= $nexttick) {
        $current = microtime(true) - $ses_start;
        $eta = (($this->NumberOfAds - $parsed) * $current) / $parsed;
        $nexttick = time() + 3;
        $mem_usage = memory_get_usage();
        echo "Parsed $parsed # $current secs \t | ";
        echo " (mem_usage: " . $mem_usage . " \t | ";
        echo "ETA: $eta secs\n";
    }
    $node = $Reader->getNode();
    // OMITTED PART: $node is an array; I do some processing and check that
    // everything I need in the following section exists in the array
    $Ad = new Ad($node); // creating an Ad object from the node
    // OMITTED PART: making some additional SQL queries to check the integrity
    // of the data before uploading it to the database
    if (!$Ad->update()) {
        // ad wasn't inserted successfully; log this in a second database table
    } else {
        // ad successfully inserted; log this in a second database table
    }
}
You'll notice that the first part of the loop is a little output tool that prints the progress of the file every 3 seconds, along with the script's memory usage. I need that because I ran into a memory problem the last time I tried to upload a file, and I wanted to figure out what was eating away the memory.
The output of this script looked something like this when I ran it:
Parsed 15 # 2.0869598388672 secs | (mem_usage: 1569552 | ETA: 1389.2195994059 secs
Parsed 30 # 5.2812438011169 secs | (mem_usage: 1903632 | ETA: 1755.1333565712 secs
Parsed 38 # 8.4330480098724 secs | (mem_usage: 2077744 | ETA: 2210.7901124829 secs
Parsed 49 # 11.377414941788 secs | (mem_usage: 2428624 | ETA: 2310.5440017496 secs
Parsed 59 # 14.204828023911 secs | (mem_usage: 2649136 | ETA: 2393.3931421304 secs
Parsed 69 # 17.032008886337 secs | (mem_usage: 2831408 | ETA: 2451.3750760901 secs
Parsed 79 # 20.359696865082 secs | (mem_usage: 2968656 | ETA: 2556.8171214997 secs
Parsed 87 # 23.053930997849 secs | (mem_usage: 3102360 | ETA: 2626.8231951916 secs
Parsed 98 # 26.148546934128 secs | (mem_usage: 3285096 | ETA: 2642.0705279769 secs
Parsed 107 # 29.092607021332 secs | (mem_usage: 3431944 | ETA: 2689.8426286172 secs
Now, I know for certain that my MySQL object has a runtime cache, which stores the results of some basic SELECT queries in an array for quick access later. This is the only variable in the script (that I know of) which increases in size throughout the whole run, so I tried turning off this option. The memory usage dropped, but only by a tiny bit, and it kept rising throughout the whole script.
My questions are the following:
Is slowly rising memory usage throughout a long-running script normal behaviour in PHP, or should I search through the whole code and try to find out what is eating up my memory?
I know that by using unset() on a variable I can free up the memory it takes, but do I need to use unset() even if I am overwriting the same variable throughout the whole file?
A slight rephrasing of my second question with an example:
Do the following two code blocks produce the same result regarding memory usage, and if not, which one is more optimal?
BLOCK1
$var = str_repeat("Hello", 4242);
$var = str_repeat("Good bye", 4242);
BLOCK2
$var = str_repeat("Hello", 4242);
unset($var);
$var = str_repeat("Good bye", 4242);
If you install the Xdebug module on your development machine, you can get it to do a function trace, and that will show memory usage for each line: https://xdebug.org/docs/execution_trace
That'll probably help you identify where the space is going; you can then try unset($var) etc. and see if it makes any difference.
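For instance, a rough sketch of tracing just the import loop, assuming Xdebug 2.x (the trace API changed in Xdebug 3); xdebug.show_mem_delta adds a memory-delta column to each call in the trace file:

ini_set('xdebug.show_mem_delta', '1');        // may need to be set in php.ini instead
xdebug_start_trace('/tmp/xml_import_trace');  // writes /tmp/xml_import_trace.xt

// ... run the XML import loop here ...

xdebug_stop_trace();

The trace then shows how much memory each call gained or lost, which makes the leaking call stand out.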
Using the exec function, I can execute an external program.
I get the total number of HTTP processes with the following code:
$count = exec("ps -ef | grep http | wc -l");
Now, here is my question: how can I get the total number of HTTP processes from a specific IP?
Thank you.
I assume you are on a Linux system. You can retrieve socket statistics through the ss utility. E.g., in order to list all connections to your http or https port, you can use:
ss -t '( sport = :http or sport = :https )'
You can further filter this by IP. So let's say you want to filter all connections by the remote address 1.2.3.4:
ss -t '( sport = :http or sport = :https )' dst 1.2.3.4
Now, mapping connections to actual processes is a bit tricky, as traditionally one connection has been handled by one process each. But this isn't always the case. You can let ss display the owning programs with the p switch, like so:
ss -tp '( sport = :http or sport = :https )' dst 1.2.3.4
You will find that ss conveniently lists those on one line each, so we can grep them out and count the uniques:
ss -tp '( sport = :http or sport = :https )' dst 1.2.3.4 | grep users | sort | uniq | wc -l
Putting this together:
$count = exec(sprintf(
    'ss -tp "( sport = :http or sport = :https )" dst %s | grep users | sort | uniq | wc -l',
    escapeshellarg($remoteAddress)
));
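Hypothetical usage, with 1.2.3.4 standing in for the client address you care about:

$remoteAddress = '1.2.3.4';   // hypothetical client IP
$count = exec(sprintf(
    'ss -tp "( sport = :http or sport = :https )" dst %s | grep users | sort | uniq | wc -l',
    escapeshellarg($remoteAddress)
));
echo "HTTP-serving processes for $remoteAddress: $count\n";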
I've got a very simple SELECT statement, for instance:
SELECT id, some_more_fields_that_do_not_matter
FROM account
WHERE status = '0'
LIMIT 2
Keep in mind that the above returns the following ids: 1,2
The next thing I do is loop over these rows in a foreach and update some records:
UPDATE account
SET connection = $_SESSION['account_id'],
status = '1'
WHERE id = $row_id
Now the rows in the table with ids 1,2 have the status '1' (which I check to make sure the rows were correctly updated). If the update failed, I undo everything. As soon as everything is OK, I have a counter from the first step, which is 2 in this case, so 2 rows should have been updated, and I verify this with a simple COUNT(*).
This information is also emailed, with for instance the following data (which means everything was updated correctly):
- Time of update: 2013-09-30 16:30:02
- Total rows to be updated (selected) = 2
- Total rows successfully updated after completing queries = 2
The following ids should have been updated
(these were returned by the SELECT statement):
1,2
So far so good. Now comes the weird part.
The very next query, made by another user, will however sometimes return, for instance, the ids 1,2 (but that should be impossible, because these should never be returned by the SELECT statement: they no longer have the status '0'). So what happens is the following:
I now receive an email with for instance:
- Time of update: 2013-09-30 16:30:39
- Total rows to be updated (selected) = 10
- Total rows successfully updated after completing queries = 8
The following ids should have been updated
(these were returned by the SELECT statement):
1,2,3,4,5,6,7,8,9,10
Now it is really strange that 1 and 2 are selected by the update. In most cases it goes fine, but very rarely it just doesn't, and returns some of the same ids that have already been updated to status '1'.
Notice the time between these updates; it's not even the same time. I first thought these queries might be executed at the exact same time (which is impossible, right?). Or is this possible? Or could the query somehow have been cached, and should I edit some settings in my mysql.conf file?
I have never had this problem before; I have tried every way of updating, but it keeps happening. Would it maybe be possible to combine these 2 queries into one big update query? All the data is just the same and nothing strange is going on. I hope someone has a clue what could cause this problem and why it happens randomly (rarely).
EDIT:
I updated the script and added microtime() calls to check how long the SELECT, the UPDATE and the verification SELECT take together.
First member (with ID 20468) makes a call at: 2013-10-01 08:30:10
2/2 rows have been updated correctly of the following 2 ID's:
33412,33395
Queries took together 0.878005027771 seconds
Second member (with ID 10123) makes a call at: 2013-10-01 08:30:14
20/22 rows have been updated correctly of the following 22 ID's:
33392,33412,33395,33396,41489,13011,12555,27971,22811 and some more but not important
Queries took together 3.3440849781036 seconds
Now you see that 33412 and 33395 are again returned by the SELECT.
Third member (with ID 20951) makes a call at: 2013-10-01 08:30:16
9/9 rows have been updated correctly of the following 9 ID's:
33392,33412,33395,33396,41489,13011,12555,27971,22811
Queries took together: (didn't return anything, which concerns me a little bit too)
Since we do not know how long the last queries took, we only know for sure that the first and second calls should have worked correctly, because there are 4 seconds between them and the execution time was 3.34 seconds. Besides that, the first one started at 2013-10-01 08:30:17, because the time logged for the call (when emailing it) is taken at the end of the script. The timer runs from the start of the first query until directly after the last query, which is before I send the email (of course).
Could it be something in my my.cnf file that makes MySQL act this weird?
I still don't understand why it didn't return any execution time for the last (third) call.
A solution would be to queue these actions by first saving them into a table and executing them one at a time with a cron job, but that's not really what I want; it should be instant when a member makes the call. Thanks for the help so far.
Anyway, here is my my.cnf in case someone has suggestions for me (the server has 16GB RAM installed):
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
max_connections = 20
query_cache_type = 1
query_cache_limit = 1M
query_cache_size = 4M
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M
innodb_buffer_pool_size = 333M
join_buffer_size = 128K
tmp_table_size = 16M
max_heap_table_size = 16M
table_cache = 200
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
#no-auto-rehash # faster start of mysql but no tab completion
[isamchk]
key_buffer = 16M
!includedir /etc/mysql/conf.d/
EDIT 2:
$recycle_available = $this->Account->Membership->query("
SELECT Account.id,
(SELECT COUNT(last_clicks.id) FROM last_clicks WHERE last_clicks.account = Account.id AND last_clicks.roulette = '0' AND last_clicks.date BETWEEN '".$seven_days_back."' AND '".$tomorrow."') AS total,
(SELECT COUNT(last_clicks.id)/7 FROM last_clicks WHERE last_clicks.account = Account.id AND last_clicks.roulette = '0' AND last_clicks.date BETWEEN '".$seven_days_back."' AND '".$tomorrow."') AS avg
FROM membership AS Membership
INNER JOIN account AS Account ON Account.id = Membership.account
WHERE Account.membership = '0' AND Account.referrer = '0' AND Membership.membership = '1'
HAVING avg > 0.9
ORDER BY total DESC");
foreach ($referrals as $key => $value)
{
    $this->Account->query(
        "UPDATE account
         SET referrer='".$account_id."', since='".$since_date."',
             expires='".$value['Account']['expires']."', marker='0', kind='1',
             auction_amount='".$value['Account']['auction_amount']."'
         WHERE id='".$recycle_available[$key]['Account']['id']."'");
    $new_referral_id[] = $recycle_available[$key]['Account']['id'];
    $counter++;
}
$total_updated = $this->Account->find('count',array('conditions'=>array('Account.id'=>$new_referral_id, 'Account.referrer'=>$account_id, 'Account.kind'=>1)));
You indicate in the comments you are using transactions. However, I can't see any $dataSource->begin(); nor $dataSource->commit(); in the PHP snippet you posted. Therefore, you must be doing $dataSource->begin(); prior to the snippet and $dataSource->commit(); or $dataSource->rollback(); after the snippet.
The problem is that you're updating and then trying to select prior to committing. No implicit commit is created, so you don't see updated data: http://dev.mysql.com/doc/refman/5.7/en/implicit-commit.html
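A sketch of the ordering this implies, reusing the datasource handles mentioned above (the model name Account and the variables are assumed from the snippets in the question):

$dataSource = $this->Account->getDataSource();
$dataSource->begin();

// ... the SELECT and the UPDATE loop from EDIT 2 above ...

$dataSource->commit();   // make the updates visible to other sessions

// Only now run the verification count, so it sees the committed rows:
$total_updated = $this->Account->find('count', array(
    'conditions' => array('Account.id' => $new_referral_id),
));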
It is hard to tell the reason for this strange behaviour without having my hands on the DB, but a much better way to do what you are doing would be to do it all in one query:
UPDATE account
SET connection = $_SESSION['account_id'],
status = '1'
WHERE status = '0'
Most likely this will solve the problem you are facing.
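If you also need to know how many rows were changed, the affected-row count of that single UPDATE replaces the separate COUNT(*) check. A sketch using PDO ($pdo is an assumed, already-open connection):

$stmt = $pdo->prepare(
    "UPDATE account SET connection = :conn, status = '1' WHERE status = '0'"
);
$stmt->execute(array(':conn' => $_SESSION['account_id']));
echo $stmt->rowCount(), " rows updated\n";   // rows actually changed by this statement

Because the row selection and the status change happen in one atomic statement, no other request can grab the same rows in between.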
I suggest using this syntax:
$query="UPDATE account SET connection = {$_SESSION['account_id']}, status = 1 WHERE id=$row_id;";
Unquoted integers avoid relying on MySQL's implicit string-to-number conversion, and remember to use {} when you interpolate an array element into a double-quoted string.
Hi, I configured Sphinx search on my test server.
Now I am getting this error: "Sphinx_Query failed: no enabled local indexes to search".
I don't understand why I'm getting this error. Can anybody help me, please?
This is my sphinx.conf:
source objectcollection
{
type = mysql
sql_host = localhost
sql_user = root
sql_pass = root
sql_db = mydatabase
sql_port = 3306
sql_query = \
SELECT id, id AS mid, obtype_id, searchtext FROM tab_objectcollection;
sql_attr_uint = mid
sql_attr_uint = obtype_id
sql_query_info = SELECT * FROM tab_objectcollection WHERE id=$id
}
index combinedobject
{
source = objectcollection
path = /usr/local/sphinx/var/data/objectcollection
morphology = stem_en
min_stemming_len = 4
stopwords = /usr/local/sphinx/var/data/stopwords.txt
min_word_len = 3
min_prefix_len = 3
min_infix_len = 0
enable_star = 1
phrase_boundary = ., ?, !, U+2026 # horizontal ellipsis
phrase_boundary_step = 100
html_strip = 1
}
indexer
{
# memory limit, in bytes, kilobytes (16384K) or megabytes (256M)
# optional, default is 32M, max is 2047M, recommended is 256M to 1024M
mem_limit = 256M
# maximum xmlpipe2 field length, bytes
# optional, default is 2M
#
max_xmlpipe2_field = 16M
# write buffer size, bytes
# several (currently up to 4) buffers will be allocated
# write buffers are allocated in addition to mem_limit
# optional, default is 1M
#
#write_buffer = 16M
}
searchd
{
listen = 3312
max_matches = 10000
log = /usr/local/sphinx/var/log/searchd.log
query_log = /usr/local/sphinx/var/log/query.log
pid_file = /usr/local/sphinx/var/log/searchd.pid
}
Thanks
Have you:
Actually built the index, i.e. run the 'indexer' program to create the index files (the usual commands are sketched below)?
Started the search daemon, searchd?
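For reference, the sequence looks roughly like this (the config path is an assumption based on the paths in the posted sphinx.conf):

indexer --config /usr/local/sphinx/etc/sphinx.conf combinedobject
searchd --config /usr/local/sphinx/etc/sphinx.conf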
I think this error means that Sphinx can't find the file(s) specified by "path" in your index definition. In my case I had:
path = /var/lib/sphinxsearch/data/delta
And I had run the indexer successfully (or so I thought) like this:
indexer delta --rotate
It said there were some documents collected. HOWEVER it actually created these files:
/var/lib/sphinxsearch/data/delta.new.sp?
And searchd failed to rotate the files. Thus spake the log:
WARNING: rotating index 'delta': rename '/var/lib/sphinxsearch/data/delta.mvp' to '/var/lib/sphinxsearch/data/delta.old.mvp' failed: No such file or directory
The solution was: delete those new files and run indexer without --rotate the first time.
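In shell terms, something like this (paths taken from the log message above):

rm /var/lib/sphinxsearch/data/delta.new.sp*
indexer delta          # first build without --rotate; later rebuilds can use --rotate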
The fact that --rotate doesn't work the first time seems like a bit of a bug to me, but I can't really be bothered to submit a bug report; it probably requires me to register or some nonsense. Anyway, hope this helps.
What I understand from your question is that in the configuration file you have to specify which table or data is to be indexed. There could also be a problem with the Sphinx daemon not being able to create the indexed data and write it to files. Do check the above.
Hope this helps somehow.
This seems to be an issue with Sphinx 2.0.5; it's filed here:
http://sphinxsearch.com/bugs/view.php?id=1268
Try using a different version (I tried 2.0.6 and the problem was gone).