Scaling SugarCRM using logical domains on Sun SPARC Enterprise T5440

Introduction

Today Sun launches the T5440 server, based on the UltraSPARC T2 Plus processor. With 8 cores per processor and 8 threads per core, this 4-socket server offers up to 256 threads and 256+ GB of memory. It also includes a hypervisor that provides an interface for resource management. The following shows how we scaled SugarCRM 5.1 on the T5440 using Sun's virtualization solution featuring Logical Domains (LDoms).

Results

This demonstrates a single Sun SPARC Enterprise T5440, configured with Logical Domains and hosting the open source SugarCRM application with MySQL, supporting 3200 concurrent users. It shows almost linear scalability from the 900 concurrent users supported on a Sun SPARC Enterprise T5120 to 3200 users on the Sun SPARC Enterprise T5440.

# of Domains   Max. Concurrent Users (>90% of responses under 2 seconds)   Requests Served   Test Duration
1              900                                                         46,320            30 minutes
2              1600                                                        81,428            30 minutes
4              3200                                                        149,093           30 minutes

Each guest domain was configured with 4 cores (32 hardware threads) and 16 GB of RAM, and was approximately 60% utilized. The primary domain was configured with 16 cores and 32 GB of RAM, and was approximately 100% utilized.

Physical test environment

The Sun SPARC Enterprise T5440 was configured to host both the MySQL database and the SugarCRM application. A Sun Storage J4200 was used for storage; two volumes were created to store the data files and log files. A total of 4 X4500 servers were used as front-end clients. All servers and clients ran the OpenSolaris Operating System.

Sun SPARC Enterprise T5440

  • 4 x 1.4 GHz UltraSPARC T2 plus Processor

  • 256 GB RAM

  • 5 x 10/100/1000Mbps Ethernet

Sun Storage J4200

  • 12 x 146 GB SAS disks

Server Configuration

For this project, four instances of SugarCRM were deployed, one on each of the 4 guest domains, with the MySQL instances residing on the control domain.

Logical Domain configurations

The server was configured with four guest domains, and five virtual switches connected to five physical Gigabit Ethernet ports. Each guest domain was configured with two virtual networks, vnet0, and vnet1.
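This layout can be built with the ldm command. The following is a minimal sketch for one switch and one guest domain, not the exact commands used for this setup; the device name nxge5 is taken from the switch output below, and boot-disk configuration is omitted:

```shell
# Create a virtual switch in the primary domain, bound to a physical NIC
ldm add-vsw net-dev=nxge5 vswitch0 primary

# Create a guest domain with 32 virtual CPUs (4 cores) and 16 GB of memory
ldm add-domain dom1
ldm add-vcpu 32 dom1
ldm add-memory 16G dom1

# Attach two virtual network devices: vnet0 on the database-facing
# switch, vnet1 on a client-facing switch
ldm add-vnet vnet0 vswitch0 dom1
ldm add-vnet vnet1 vswitch1 dom1

# Bind resources and start the domain (virtual disk setup omitted)
ldm bind-domain dom1
ldm start-domain dom1
```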

# ldm list
NAME     STATE   FLAGS  CONS  VCPU  MEMORY  UTIL  UPTIME
primary  active  -n-cv  SP    128   54G     0.1%  3d 22h 26m
dom1     active  -n---  5001  32    16G     0.0%  2d 15h 24m
dom2     active  -n---  5002  32    16G     0.0%  2d 13h 4m
dom3     active  -n---  5005  32    16G     0.0%  2d 6h 5m
dom4     active  -n---  5006  32    16G     0.0%  2d 13h 35m

The virtual switch vswitch0 was used to connect the guest domains to the database server. vswitch1 through vswitch4 were used for requests coming from the clients. Each guest domain was connected to the clients and the database through its virtual networks, vnet1 and vnet0 respectively.

    VSW

    NAME             MAC               NET-DEV   DEVICE     MODE

    vswitch0         00:14:4f:fa:e0:9d nxge5     switch@0   prog,promisc

        PEER                        MAC

        vnet0@dom1                  00:14:4f:fa:f9:53

        vnet0@dom2                  00:14:4f:f9:58:ef

        vnet0@dom5                  00:14:4f:f9:11:ec

        vnet0@dom6                  00:14:4f:fb:d9:4b

        vnet1@primary               00:14:4f:fa:20:8a

        vnet2@primary               00:14:4f:fb:19:b3

    NAME             MAC               NET-DEV   DEVICE     MODE

    vswitch1         00:14:4f:f8:fc:a8 nxge6     switch@1   prog,promisc

        PEER                        MAC

        vnet1@dom1                  00:14:4f:f8:38:ad

    NAME             MAC               NET-DEV   DEVICE     MODE

    vswitch2         00:14:4f:f9:9d:71 nxge7     switch@2   prog,promisc

        PEER                        MAC

        vnet1@dom2                  00:14:4f:f8:b6:e8

    NAME             MAC               NET-DEV   DEVICE     MODE

    vswitch3         00:14:4f:f9:2c:83 e1000g0   switch@3   prog,promisc

        PEER                        MAC

        vnet1@dom5                  00:14:4f:f9:aa:81

    NAME             MAC               NET-DEV   DEVICE     MODE

    vswitch4         00:14:4f:f9:2d:21 e1000g1   switch@4   prog,promisc

        PEER                        MAC

        vnet1@dom6                  00:14:4f:f8:06:27


Software Installation and Configuration

The following components were installed on the domains. Refer to Satish Vanga's blog for detailed installation instructions for each component.

Primary Domain

  • MySQL 5.0.67

Guest Domain

  • Sun Web Server 7.0
  • Coolstack 1.2 (Runtime, PHP5)
  • SugarCRM 5.1
  • eAccelerator 0.9.5.2

As we scaled beyond 2 logical domains, we ran into a MySQL scalability bottleneck and saw no further improvement in throughput. A second instance of MySQL was therefore created to serve the third and fourth domains; the 2 instances of MySQL 5.0.67 were configured to listen on ports 3306 and 3307. The following changes were made to run 2 MySQL instances:

  1. Create 2 MySQL configuration files, e.g. /etc/my.cnf (default) and /etc/my.cnf.2

  2. Change the following settings in each config file:

  • 1st instance of mysql:

port = 3306

socket = /tmp/mysql.sock

datadir = "/dbpool/data/5.0"

innodb_data_home_dir = /dbpool/data/5.0

innodb_log_group_home_dir = /logpool/logs/5.0

  • 2nd instance of mysql:

port = 3307

socket = /tmp/mysql.sock2

datadir = "/dbpool/data/5.0-2"

innodb_data_home_dir = /dbpool/data/5.0-2

innodb_log_group_home_dir = /logpool/logs/5.0-2


3. On the guest domains, modify the Sugar file ./include/database/MysqlManager.php to:

      $this->database = mysqli_connect($configOptions['db_host_name'],
          $configOptions['db_user_name'], $configOptions['db_password'],
          "sugarcrm", "3307");

For reference, the SugarCRM forum has a thread discussing this topic at https://www.sugarcrm.com/forums/showthread.php?t=33987
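With the two configuration files in place, each instance can be started by pointing mysqld_safe at its own defaults file. A sketch follows; the assumption that mysqld_safe and mysql are on the PATH is ours, and socket paths match the config above:

```shell
# Start the default instance (reads /etc/my.cnf, listens on port 3306)
mysqld_safe --defaults-file=/etc/my.cnf &

# Start the second instance from its own config (listens on port 3307)
mysqld_safe --defaults-file=/etc/my.cnf.2 &

# Verify each instance through its own socket
mysql --socket=/tmp/mysql.sock  -e "SELECT @@port"
mysql --socket=/tmp/mysql.sock2 -e "SELECT @@port"
```

Note that --defaults-file must be the first option given to mysqld_safe.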


Test Workload

We used the Tidbit tool provided by SugarCRM to generate seed data. The database was populated with 2000 users and a load factor of 2000.

Business Object      Number of Records
Users                2000
Accounts             2000
Calls                48000
Emails               32000
Contacts             8000
Leads                8000
Opportunities        4000
Cases                8000
Bugs                 6000
Meetings             16000
Tasks                8000
Notes                8000
Total Records*       602,644

A total of 602,644 records were inserted to initialize the database for the tests. It includes the main objects and the relationship rows that link the data together.
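For reference, Tidbit is driven from the command line. The entry point and flag below are assumptions drawn from later Tidbit releases and may differ in the version used for this test:

```shell
# Hypothetical Tidbit invocation from the Tidbit checkout directory;
# install_cli.php and the -l (load factor) flag are assumptions.
cd Tidbit
php -f install_cli.php -- -l 2000
```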

Tuning Options used

Solaris 10 Operating System

The following tuning options were applied to the /etc/system file on the primary and all guest domains:

set autoup=600

set rlim_fd_max=32768

set rlim_fd_cur=32768

set ip_squeue_soft_ring=1

set ip:ip_soft_rings_cnt=8

Also, to support a higher rate of HTTP requests, the following ndd parameters were applied:

ndd -set /dev/tcp tcp_conn_req_max_q 16384

ndd -set /dev/tcp tcp_conn_req_max_q0 16384

ndd -set /dev/tcp tcp_naglim_def 1

ndd -set /dev/tcp tcp_smallest_anon_port 2048
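ndd settings do not survive a reboot; a conventional approach on Solaris 10 is to replay them from an rc script at boot. A sketch follows — the script name and staging location are our choice, not part of the original setup:

```shell
# Write an init script that re-applies the TCP tunings at boot.
# Install it as e.g. /etc/rc2.d/S99nddtune (staged here in /tmp).
cat > /tmp/S99nddtune <<'EOF'
#!/sbin/sh
ndd -set /dev/tcp tcp_conn_req_max_q 16384
ndd -set /dev/tcp tcp_conn_req_max_q0 16384
ndd -set /dev/tcp tcp_naglim_def 1
ndd -set /dev/tcp tcp_smallest_anon_port 2048
EOF
chmod 744 /tmp/S99nddtune
```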

Mysql Configuration

Settings for max_connections, table_cache, innodb_thread_concurrency, and innodb_flush_log_at_trx_commit were applied to support a higher number of concurrent users. Because innodb_flush_log_at_trx_commit is set to 0 (meaning the log is not flushed on each transaction commit), use a battery-backed cache on the RAID controller to prevent data loss in case of a mysqld process crash.

[client]
port = 3306
socket = /tmp/mysql.sock

[mysqld]
port = 3306
socket = /tmp/mysql.sock
datadir = "/dbpool/data/5.0"
back_log = 50

max_connections = 5000
max_connect_errors = 10
table_cache = 4800
max_allowed_packet = 2M
binlog_cache_size = 8M
max_heap_table_size = 128M
sort_buffer_size = 64M
join_buffer_size = 128M
thread_cache_size = 3000
thread_concurrency = 512
record_buffer = 8M
query_cache_size = 256M
query_cache_type = 1
query_cache_limit = 4M
query_prealloc_size=65536
ft_min_word_len = 4
default-storage-engine = innodb
thread_stack = 192K
transaction_isolation = REPEATABLE-READ
tmp_table_size = 1024M

log_long_format

server-id = 1
key_buffer_size = 64M
read_buffer_size = 8M
read_rnd_buffer_size = 64M
bulk_insert_buffer_size = 128M

innodb_additional_mem_pool_size = 1024M
innodb_buffer_pool_size = 6G
innodb_data_file_path = ibdata1:4096M:autoextend
innodb_data_home_dir = /dbpool/data/5.0
innodb_file_io_threads = 8
innodb_thread_concurrency = 64
innodb_flush_log_at_trx_commit = 0
innodb_log_buffer_size = 16M
innodb_log_file_size = 1G
innodb_log_files_in_group = 2
innodb_log_group_home_dir = /logpool/logs/5.0
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 120
innodb_locks_unsafe_for_binlog = 1
innodb_adaptive_hash_index = 0


Sun Web Server 7.0 Tuning

# Sun Web Server magnus.conf
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
Init fn="load-modules" shlib="libfastcgi.so"


# Sun Web Server obj.conf
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# You can edit this file, but comments and formatting changes
# might be lost when you use the administration GUI or CLI.
<Object name="default">
NameTrans fn="assign-name" from="*.php" name="fcgiPHP"
PathCheck fn="find-index" index-names="index.html,home.html,index.jsp,index.php"
ObjectType fn="type-by-extension"
ObjectType fn="force-type" type="text/plain"
Service method="(GET|HEAD|POST)" type="*~magnus-internal/*" fn="send-file"
AddLog fn="flex-log"
</Object>

<Object name="fcgiPHP">
Service fn="responder-fastcgi" app-path="/opt/coolstack/php5/bin/php-cgi" bind-path="localhost:3101" app-env="PHPRC=/opt/coolstack/php5/lib" app-env="PHP_FCGI_CHILDREN=650" app-env="PHP_FCGI_MAX_REQUEST=4500" app-env="FCGI_WEB_SERVER_ADDRS=localhost" req-retry="10" min-procs="1" max-procs="1" restart-interval="0" reuse-connection="true" connection-timeout="36000" resp-timeout="9999999" listen-queue="8192" UseOutputStreamSize="147456"
</Object>

<Object name="j2ee">
Service fn="service-j2ee" method="*"
</Object>

<Object name="es-internal">
</Object>

<Object name="cgi">
ObjectType fn="force-type" type="magnus-internal/cgi"
Service fn="send-cgi"
</Object>

<Object name="send-precompressed">
PathCheck fn="find-compressed"
</Object>

<Object name="compress-on-demand">
Output fn="insert-filter" filter="http-compression"
</Object>


PHP and eAccelerator Tuning

[PHP]
cgi.fix_pathinfo = 1
memory_limit = 320M ; Maximum amount of memory a script may consume
default_socket_timeout = 1800
safe_mode = 0
post_max_size = 20M
upload_max_filesize = 20M
date.timezone = "US/Pacific"

[Session]
session.use_cookies = 1
session.cookie_lifetime = 3600
session.gc_probability = 1
session.gc_divisor = 5000
session.gc_maxlifetime = 0
session.entropy_file = "/dev/urandom"
session.save_path = "/tmp/sessions"

include_path=/sun/webserver7/https-server/docs/SugarCE-Full-5.1.0:/opt/coolstack/php5/lib/php:.:
extension_dir=/opt/coolstack/php5/lib/php/extensions/no-debug-non-zts-2006061
extension="mysql.so"
extension="mysqli.so"
extension="curl.so"
extension="zlib.so"

#
# eAccelerator 0.9.5.2
#
extension="eaccelerator-0952.so"
eaccelerator.shm_size="128"
eaccelerator.cache_dir="/tmp/eaccelerator"
eaccelerator.enable="1"
eaccelerator.optimizer="1"
eaccelerator.check_mtime="0"
eaccelerator.debug="0"
eaccelerator.filter=""
eaccelerator.shm_max="0"
eaccelerator.shm_ttl="0"
eaccelerator.shm_prune_period="0"
eaccelerator.shm_only="0"
eaccelerator.compress="0"
eaccelerator.compress_level="9"
eaccelerator.allowed_admin_path="/opt/coolstack/apache2/htdocs"
eaccelerator.log_file = "/tmp/eaccelerator_log"
eaccelerator.content="none"
eaccelerator.sessions="none"


SugarCRM Tuning

The following tuning options were applied to SugarCRM by modifying the config_override.php file:

$sugar_config['disable_count_query'] = true;

$sugar_config['disable_vcr'] = true;

$sugar_config['verify_client_ip'] = false;

$sugar_config['calculate_response_time'] = false;

In addition, the Tracker feature was disabled to avoid database lock contention. The Tracker settings can be changed by logging in as the admin user and navigating through the Admin tab to Enable/Disable tracking.


About

Yun Tseng Chew
