1. Hi all, I am a RabbitMQ user in China. When we run RabbitMQ in a cluster, we have found a problem: if we use many producers (e.g. fifty) to send messages to the same queue ceaselessly, and the producers connect to a cluster node other than the one hosting the queue, memory grows very quickly. If we use fewer producers (e.g. five), or the producers connect to the node that hosts the queue, memory usage is normal. The consumers have enough capacity to keep up with the messages. The RabbitMQ version is 2.8.1. Has anyone met the same problem? The attachment is the monitoring interface. Thank you all very much. --
2. Matthias Radestock, Aug 13 (1 day ago), to Discussions, me: Please post the output of 'rabbitmqctl report' for all three machines in your cluster at the time the memory threshold has been exceeded on the queue node.
3. Liu Hao, 10:40 (7 hours ago), to Matthias, Discussions: The destination queue's name is "mqclient_test_queue_1" and it is on the cnbj-cuc-tst01-crl0015 node. The rabbitmqctl report output of the cnbj-cuc-tst01-crl0015 node is in the "cnbj-cuc-tst01-crl0015" attachment; the other two are similar. The picture is the monitoring interface. Thank you very much.
4. Matthias Radestock, 12:12 (5 hours ago), to me, Discussions: Ah, I completely forgot that 'report' reports on all nodes. Sorry. There are about 1100 connections and 8400 channels. How big are the messages? Please run the following on crl0015 when it is using lots of memory: rabbitmqctl eval 'begin {L, Pid} = (all on one line) and post the output.
Matthias.
5. Liu Hao, 14:14 (3 hours ago), to Matthias, Discussions: The connections and channels were indeed too many, so I reduced them. Now I have 40 consumer connections (one channel per connection) and 50 producer connections (10 channels per connection). Memory usage is the same: it still grows a lot. But I found an interesting fact: if I use 50 producer connections with only one channel each, memory stays under 2G, but most of the connections are in flow control and the publish rate is too slow. This is just a test demo, and each message is 10KB. The report output is very big (35M), so I have attached the beginning and the end of it. Thank you very much.
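For reference, the two producer topologies described above differ mainly in how many publishing channels exist at once. A minimal sketch (plain arithmetic, no RabbitMQ client involved; the numbers are the ones from this thread, not measured broker state):

```python
# Channel counts for the two producer topologies described in the thread.

def total_channels(connections, channels_per_connection):
    """Total AMQP channels opened by a group of connections."""
    return connections * channels_per_connection

# Topology 1: 50 producer connections x 10 channels each
producers_multi = total_channels(50, 10)   # 500 publishing channels

# Topology 2: 50 producer connections x 1 channel each
producers_single = total_channels(50, 1)   # 50 publishing channels

# Consumers are the same in both cases: 40 connections x 1 channel each
consumers = total_channels(40, 1)          # 40 consuming channels

print(producers_multi, producers_single, consumers)
```

With ten times as many publishing channels, ten times as much data can be in flight inside the broker before flow control pushes back, which matches the memory behaviour reported above.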
6. Matthias Radestock, 16:38 (1 hour ago), to me, Discussions: I think you are simply pushing Rabbit beyond the limit of its capability. Internal flow control happens on a per-process-link basis, so increasing the number of publishing channels corresponds to a linear increase in the amount of internal buffer space that is potentially required, to the point where all memory is taken up by messages sitting in these buffers. Publishing across nodes carries an extra cost, so the buffers will fill up at lower publishing rates. If with 50 producer connections x 1 channel you see most connections flowed, then that is an indication that Rabbit is already operating at capacity but is still able to keep overall memory use under control. Adding more producer connections/channels will not increase the sustainable sending rate but will degrade Rabbit's ability to control memory use.
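The linear growth in buffer space described above can be illustrated with a back-of-the-envelope model. The per-channel buffer depth below is an invented illustrative number, not a RabbitMQ constant; only the 10KB message size comes from the thread:

```python
# Rough worst-case model for message payload parked in per-channel
# internal buffers. ASSUMPTION: each publishing channel can buffer up
# to `buffer_depth` messages before flow control stops it; the real
# depth depends on RabbitMQ internals, so 400 here is only illustrative.

MESSAGE_SIZE = 10 * 1024  # 10KB per message, as stated in the thread

def worst_case_buffered_bytes(channels, buffer_depth):
    """Upper bound on payload bytes sitting in internal buffers."""
    return channels * buffer_depth * MESSAGE_SIZE

# 50 connections x 10 channels = 500 channels: the bound grows 10x
many = worst_case_buffered_bytes(500, 400)  # about 2 GB
few = worst_case_buffered_bytes(50, 400)    # about 0.2 GB

print(many, few)
```

The absolute numbers are made up; the point is the linear factor: ten times the publishing channels means ten times the worst-case buffer memory, without any increase in the sustainable publish rate.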
Regards, Matthias.
7. Matthias Radestock, 16:46 (1 hour ago), to Discussions, me: You may also want to enable HiPE compilation - see http://www./configure.html. I doubt any of this will make much difference though, since the bottleneck in your system is the queues, and HiPE compilation and most of the performance improvements have little impact on queue performance.