Discussion:
[corosync] Corosync performance tuning on 10Gbe
安泱
2014-09-25 13:15:41 UTC
Permalink
Hi all,

I run a 3-node corosync cluster with cman, and on top of it a qpidd cluster-mode service.
All the packages are from the CentOS 6.5 repo and the MRG clone repo, http://glitesoft.cern.ch/cern/mrg/slc6X/x86_64/RPMS/.

I tested the cluster with the qpid-send and qpid-receive tools; the maximum bandwidth is about 117-119 MB/s.
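[Editor's note: the arithmetic below is not from the thread, but it is worth noting that 117-119 MB/s is almost exactly the UDP goodput of a single 1 Gb/s Ethernet link at the standard 1500-byte MTU, i.e. the cluster path appears capped at GbE-class throughput. A quick sanity check, assuming IPv4 and standard Ethernet framing:]

```python
# UDP payload throughput of a 1 Gb/s link at MTU 1500 (IPv4, no VLAN tag).
# Per-frame on-wire overhead: preamble 8 + Ethernet header 14 + FCS 4
# + inter-frame gap 12 = 38 bytes around each 1500-byte IP packet.
LINE_RATE = 1_000_000_000 / 8          # bytes per second on the wire
MTU = 1500
OVERHEAD = 8 + 14 + 4 + 12             # per-frame wire overhead, bytes
UDP_PAYLOAD = MTU - 20 - 8             # minus IPv4 (20) and UDP (8) headers

goodput = LINE_RATE * UDP_PAYLOAD / (MTU + OVERHEAD)
print(f"{goodput / 1e6:.1f} MB/s")     # ~119.6 MB/s
```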

When I run the standalone qpid performance test, the bandwidth reaches more than 9 Gb/s, so I think the operating system and hardware tuning are good.

Corosync's totem protocol has many options; maybe the default configuration is suited to a GbE network. Could you give any advice on making a corosync cluster run at full speed on 10GbE?

Thanks in advance.

Bests.
An Yang
Jan Friesse
2014-09-25 15:43:53 UTC
Permalink
anyang,
Post by 安泱
Hi all,
I run a 3-node corosync cluster with cman, and on top of it a qpidd cluster-mode service.
All the packages are from the CentOS 6.5 repo and the MRG clone repo, http://glitesoft.cern.ch/cern/mrg/slc6X/x86_64/RPMS/.
I tested the cluster with the qpid-send and qpid-receive tools; the maximum bandwidth is about 117-119 MB/s.
When I run the standalone qpid performance test, the bandwidth reaches more than 9 Gb/s, so I think the operating system and hardware tuning are good.
Corosync's totem protocol has many options; maybe the default configuration is suited to a GbE network. Could you give any advice on making a corosync cluster run at full speed on 10GbE?
Corosync was not tested in a 10GbE environment (with the exception of IBA).
And honestly, I don't think 10Gb can be achieved with the standard MTU. So
my advice would be to increase the MTU of the NIC and then increase the
MTU in corosync.conf.
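[Editor's note: a sketch of that advice; interface name and exact values are illustrative. Since the poster runs corosync under cman, the totem options live in cluster.conf rather than corosync.conf; the plain-corosync form is shown alongside it.]

```
# Raise the NIC MTU first (interface name is illustrative):
#   ip link set eth1 mtu 9000

# Plain corosync: netmtu in the totem section of corosync.conf.
# It must leave room for the IP/UDP headers below the NIC MTU.
totem {
    version: 2
    netmtu: 8982
}

# cman: the equivalent totem element in cluster.conf:
#   <totem netmtu="8982"/>
```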

Also, corosync 2.x contains many optimizations, so it may behave better.

Regards,
Honza
_______________________________________________
discuss mailing list
http://lists.corosync.org/mailman/listinfo/discuss
An Yang
2014-09-26 00:35:14 UTC
Permalink
Thanks Jan.


------------------ Original ------------------
From: "Jan Friesse"<***@redhat.com>;
Date: Thu, Sep 25, 2014 11:43 PM
To: "安泱"<***@waycooler.co>; "discuss"<***@corosync.org>;

Subject: Re: [corosync] Corosync performance tuning on 10Gbe


anyang,
Post by 安泱
Hi all,
I run a 3-node corosync cluster with cman, and on top of it a qpidd cluster-mode service.
All the packages are from the CentOS 6.5 repo and the MRG clone repo, http://glitesoft.cern.ch/cern/mrg/slc6X/x86_64/RPMS/.
I tested the cluster with the qpid-send and qpid-receive tools; the maximum bandwidth is about 117-119 MB/s.
When I run the standalone qpid performance test, the bandwidth reaches more than 9 Gb/s, so I think the operating system and hardware tuning are good.
Corosync's totem protocol has many options; maybe the default configuration is suited to a GbE network. Could you give any advice on making a corosync cluster run at full speed on 10GbE?
Corosync was not tested in a 10GbE environment (with the exception of IBA).
And honestly, I don't think 10Gb can be achieved with the standard MTU. So
my advice would be to increase the MTU of the NIC and then increase the
MTU in corosync.conf.

I used both 9000 and 8982 MTU; the performance is almost the same.
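[Editor's note: for comparison, the arithmetic below (not from the thread) shows that at MTU 9000 a 10 Gb/s link can carry well over 1.2 GB/s of UDP payload, so with the measurement still stuck near 118 MB/s the MTU is clearly not the limiting factor; totem's ordered, token-based delivery is a more likely ceiling. Assuming IPv4 and standard Ethernet framing:]

```python
# UDP payload capacity of a 10 Gb/s link with 9000-byte jumbo frames (IPv4).
LINE_RATE = 10_000_000_000 / 8         # bytes per second on the wire
MTU = 9000
OVERHEAD = 8 + 14 + 4 + 12             # preamble + header + FCS + inter-frame gap
UDP_PAYLOAD = MTU - 20 - 8             # minus IPv4 (20) and UDP (8) headers

goodput = LINE_RATE * UDP_PAYLOAD / (MTU + OVERHEAD)
print(f"{goodput / 1e6:.0f} MB/s")     # ~1241 MB/s
```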

Also, corosync 2.x contains many optimizations, so it may behave better.

Regards,
Honza
An Yang
2014-10-05 09:45:32 UTC
Permalink
Hi Friesse,
Post by Jan Friesse
Corosync was not tested in a 10GbE environment (with the exception of IBA).
I got corosync running in an FDR IB environment by setting transports="rdma" in cman's configuration file, and ran the same benchmark test; the performance is no better than on 10GbE.

Should I create a new thread to discuss performance tuning in an FDR IB environment?
Jan Friesse
2014-10-06 07:49:40 UTC
Permalink
An,
Post by An Yang
Hi Friesse,
Post by Jan Friesse
Corosync was not tested in a 10GbE environment (with the exception of IBA).
I got corosync running in an FDR IB environment by setting transports="rdma" in cman's configuration file, and ran the same benchmark test; the performance is no better than on 10GbE.
Should I create a new thread to discuss performance tuning in an FDR IB environment?
Actually, IB itself is not very well supported in corosync, and we will
probably remove its support completely one day (if nobody takes care of
maintaining it).

Can you please give corosync 2.x a try? Is the performance still bad (on
10GbE)?

Regards,
Honza
An Yang
2014-10-06 15:36:37 UTC
Permalink
Post by Jan Friesse
Can you please give a try to corosync 2.x? Is performance still bad (on
10GbE)?
If the qpid-cpp-server-cluster or qpid-cpp-server-ha plugins could run with corosync 2.x, I would like to try it.

Have you ever heard any success stories?
Jan Friesse
2014-10-07 06:43:48 UTC
Permalink
An,
Post by An Yang
Post by Jan Friesse
Can you please give corosync 2.x a try? Is the performance still bad (on
10GbE)?
If the qpid-cpp-server-cluster or qpid-cpp-server-ha plugins could run with corosync 2.x, I would like to try it.
Have you ever heard any success stories?
I believe qpid migrated to active-passive HA, so there should no longer
be a problem with corosync itself (because the corosync communication
layer is no longer used to transport qpid messages).


But this is probably better asked on the qpid mailing list.

Regards,
Honza
