How to Handle Connections in a Large Zimbra Deployment


Zimbra is a powerful open-source mail server. This article illustrates how to tune Zimbra connection handling (HTTP, IMAP, POP, SMTP, LMTP).

Zimbra Web Server: Zimbra uses Jetty as its Java web application server (mailboxd). Jetty supports idle but long-lived HTTP connections without dedicating a thread to each one. By default the zimbraHttpNumThreads limit is 250; on busy servers you can raise it to 500 or 1000.

[root@mailstore1 libexec]# su - zimbra
[zimbra@mailstore1 libexec]$ zmprov gs `zmhostname` zimbraHttpNumThreads
zimbraHttpNumThreads: 250
[zimbra@mailstore1 libexec]$ zmprov ms `zmhostname` zimbraHttpNumThreads 500
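
Attributes like zimbraHttpNumThreads are read when the web server starts, so the change typically does not take effect until the mailbox service is restarted; a minimal sketch:

[zimbra@mailstore1 libexec]$ zmmailboxdctl restart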

LMTP

LMTP is the protocol used between the Postfix MTA and the mailbox store to deliver mail. When possible, Postfix performs multiple LMTP transactions on the same connection. Message delivery is an expensive operation, so a handful of delivery threads can keep the server busy unless those threads become blocked on some resource. While it is tempting to increase the LMTP threads (and the corresponding Postfix LMTP concurrency setting) when MTA queues are backed up and delivery latency is high, adding more concurrent load is unlikely to speed delivery: you will likely bottleneck your I/O subsystem and risk lowering throughput because of contention. If you do experience mail queue backup because LMTP deliveries are slow, take thread dumps on the mailbox server to see why the LMTP threads are unable to make progress (see the example below). Another risk of high LMTP concurrency is that, in the event of a bulk mailing, the server may become unresponsive because it is so busy with message deliveries. The default Postfix LMTP concurrency and mailbox server LMTP thread count are both 20.
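
To capture those thread dumps, ZCS ships a zmthrdump utility; a minimal sketch (exact options and output location vary by ZCS version):

[zimbra@mailstore1 ~]$ zmthrdump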

[zimbra@mailstore1 libexec]$ zmprov gs `zmhostname` zimbraLmtpNumThreads
zimbraLmtpNumThreads: 20

[zimbra@mailstore1 libexec]$ zmprov ms `zmhostname` zimbraLmtpNumThreads 40
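
On the Postfix side, the matching concurrency can be inspected with the postconf binary bundled with Zimbra. This is a sketch that assumes the stock Postfix parameter name lmtp_destination_concurrency_limit; note that Zimbra manages the Postfix configuration, so persistent changes should go through the supported MTA settings for your ZCS version rather than direct edits to main.cf:

[zimbra@mailstore1 ~]$ postconf lmtp_destination_concurrency_limit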

MySQL

 

ZCS stores metadata about the content of mailboxes in a MySQL database, using MySQL's InnoDB storage engine. InnoDB caches data from disk and performs best when load doesn't force it to constantly evict pages from its cache and read new ones. Every mailbox store server has its own instance of MySQL so ZCS can scale horizontally. Inside each MySQL server instance there are 100 mailbox groups, each with its own database, to avoid creating very large tables that store data for all users. The number 100 is somewhat arbitrary, but works well.

MySQL configuration for the mailbox server is stored in /opt/zimbra/conf/my.cnf. This file is not rewritten by a config rewriter, but it is also not preserved across upgrades, so keep a record of your changes.

Configure the following tunables. All settings below should be in the [mysqld] section.

Increase the table_cache and innodb_open_files settings to allow MySQL to keep more tables open at one time (can reduce DB I/O substantially). The default settings will be set to similar values when bug 32897 (increase table_cache and innodb_open_files) is implemented:

table_cache = 1200

innodb_open_files = 2710

Set InnoDB's cache size. The ZCS installer sets this to 40% of the RAM in the system. There is a local config variable for the MySQL memory percentage, but today my.cnf does not get rewritten after installation, so you have to edit my.cnf directly if you want to change this setting. The amount of memory you assign to MySQL and the JVM together should not exceed 80% of system memory, and should be lower if you are running other services on the same system. Here is an example of 40% of an 8GB system:

innodb_buffer_pool_size = 3435973840

innodb_buffer_pool_size = 6871947680 (40% of a 16GB system; this can be extended up to roughly 70% of RAM only if the JVM and other services leave enough headroom)
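
To derive the buffer pool value for a different RAM size, multiply the installed RAM in bytes by the chosen fraction. A minimal sketch that computes 40% of the RAM the kernel reports in /proc/meminfo (MemTotal is in kB, hence the extra factor of 1024):

$ awk '/MemTotal/ {printf "innodb_buffer_pool_size = %.0f\n", $2 * 1024 * 0.40}' /proc/meminfo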


See the Memory Allocation page for the calculations for other RAM amounts.

InnoDB writes out pages in its cache after a certain percentage of pages are dirty. The default is 90%. This default minimizes the total number of writes, but it causes a major bottleneck when the 90% threshold is reached: the database becomes unresponsive while the disk system writes out all of those changes in one burst. We recommend setting the dirty flush ratio to 10%, which causes more total I/O but avoids spiky write load.

innodb_max_dirty_pages_pct = 10

MySQL is configured to store its data in files, and the Linux kernel buffers file I/O. This kernel buffering is not useful to InnoDB, because InnoDB makes its own paging decisions; the kernel just gets in the way. Bypass the kernel's buffering with:

innodb_flush_method = O_DIRECT

 

Current my.cnf settings on this server:


thread_cache_size = 110

max_connections   = 110

# We do a lot of writes, query cache turns out to be not useful.

query_cache_type = 0

sort_buffer_size = 1048576

read_buffer_size = 1048576

# (Num mailbox groups * Num tables in each group) + padding

table_cache = 1200

innodb_buffer_pool_size       = 5017850265

innodb_log_file_size           = 524288000

innodb_log_buffer_size         = 8388608

innodb_file_per_table

innodb_open_files            = 2710

innodb_max_dirty_pages_pct     = 30

innodb_flush_method           = O_DIRECT

innodb_flush_log_at_trx_commit = 0

max_allowed_packet
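
Edits to my.cnf only take effect after the MySQL instance is restarted. The MySQL control script differs between ZCS versions, so the safest generic approach during a maintenance window is to restart the Zimbra services; a minimal sketch:

[zimbra@mailstore1 ~]$ zmcontrol restart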

——————————————————————————————————————————————

Lucene Index

 

ZCS creates and maintains a Lucene search index for every mailbox. As messages arrive they are added to the Lucene index, and Lucene merges these additions frequently, which results in I/O. If multiple additions to a mailbox can be collected in RAM and flushed at once, the write load is lower. ZCS tries to perform this optimization by keeping a certain number of mailboxes' index writers open (local config zimbra_index_lru_size, default 100) and flushing any open index writers periodically (local config zimbra_index_idle_flush_time, default 10 minutes). For example, if a single mailbox receives two messages within the 10-minute window between flushes and its index writer stays in the cache between the two deliveries, the index update results in fewer disk writes.
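
Both knobs are local config values and can be inspected or adjusted with zmlocalconfig; a minimal sketch (200 is only an illustrative value, and the mailbox service must be restarted for the change to take effect):

$ zmlocalconfig zimbra_index_lru_size zimbra_index_idle_flush_time
$ zmlocalconfig -e zimbra_index_lru_size=200
$ zmmailboxdctl restart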

However, increasing the number of index writers is counterproductive under at least these conditions:

You do not have sufficient RAM to spare for other purposes (mailbox/message caches, MySQL).

You have a large number of provisioned mailboxes which receive mail.

You frequently send messages to all mailboxes on a single ZCS mailbox node, which blows through the index writer cache. Delivering messages to all mailboxes is one of the peak load times, so the cache gives you no benefit exactly when you need it most.

We have found that setting the index writer cache size to more than 500-1000 on an 8GB system can result in high GC times and/or out-of-memory errors, depending on your mailbox usage.

If you need to disable the index writer cache entirely (because you are seeing out of memory errors, or you have determined that your message delivery rate is so even across many mailboxes that the cache doesn’t reduce IO), do this:

# need to restart mailbox service for this to take effect

$ zmlocalconfig -e zimbra_index_max_uncommitted_operations=0

Setting zimbra_index_max_uncommitted_operations to 0 overrides any value of zimbra_index_lru_size; that is, 0 uncommitted operations disables use of the index writer cache.

See also bug 24074 (too many recipients should bypass index writer cache).

In ZCS 5.0.3, a "batched indexing" capability (bug 19235) was added for Lucene indexes. In a future release this will become the default mode (bug 27913: make batched indexing the only indexing mode). To take advantage of this performance enhancement and reduce overall Lucene I/O on a ZCS system, you can use a command similar to the following (note: this is a per-COS setting):

$ for cos in `zmprov gac`; do
      zmprov mc $cos zimbraBatchedIndexingSize 20;
  done
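
To verify the result for a particular COS (here the built-in "default" COS is used as an example):

$ zmprov gc default zimbraBatchedIndexingSize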

Other useful Zimbra index local config values (the values in parentheses are suggested higher values for larger deployments):

zimbra_index_deferred_items_failure_delay = 300

zimbra_index_directory = ${zimbra_home}/index

zimbra_index_disable_perf_counters = false

zimbra_index_lucene_avg_doc_per_segment = 10000

zimbra_index_lucene_io_impl = nio

zimbra_index_lucene_max_buffered_docs = 200

zimbra_index_lucene_max_merge = 2147483647

zimbra_index_lucene_max_terms_per_query = 50000

zimbra_index_lucene_merge_factor = 10

zimbra_index_lucene_merge_policy = true

zimbra_index_lucene_min_merge = 1000

zimbra_index_lucene_ram_buffer_size_kb = 10240 (20480)

zimbra_index_lucene_term_index_divisor = 1

zimbra_index_lucene_use_compound_file = true

zimbra_index_max_readers = 35 (70)

zimbra_index_max_transaction_bytes = 5000000 (10000000)

zimbra_index_max_transaction_items = 100

zimbra_index_max_writers = 100 (200)

zimbra_index_reader_cache_size = 20 (40)

zimbra_index_reader_cache_sweep_frequency = 30

zimbra_index_reader_cache_ttl = 300

zimbra_index_rfc822address_max_token_count = 512

zimbra_index_rfc822address_max_token_length = 256

zimbra_index_threads = 10 (20)

zimbra_index_wildcard_max_terms_expanded = 20000

zimbra_require_interprocess_security: The default configuration is zimbra_require_interprocess_security=1, which forces mailboxd to use LDAP STARTTLS for all LDAP queries. This is good for security, but hurts performance in a large environment. STARTTLS requires more resources and processing, but more importantly, JNDI is inefficient with LDAP STARTTLS connections because it opens a new connection for each LDAP request rather than reusing connections from the LDAP connection pool. As long as your internal network is trusted, there is generally no reason to use encrypted LDAP requests, since these requests travel only on the internal protected network and are not accessible to external users. Setting this to 0 is recommended if acceptable from an internal security perspective. More details on this and related options can be found here: STARTTLS Localconfig Values

[zimbra@mailstore1 libexec]$ zmlocalconfig -e zimbra_require_interprocess_security=0

Current: zimbra_require_interprocess_security = 1 (enabled)


———————————————————————————————————————

JVM Options

You can tune the Java virtual machine (JVM) that runs the mailbox server application by changing a few local config variables. Java runtimes have an automatically garbage-collected heap, and ZCS maintains caches in the Java heap, so selecting the garbage collector and adjusting the heap size and options are critical to good performance. ZCS by default tries to provide the best settings for your system. However, we strongly recommend that all installations double-check their JVM settings after reviewing and understanding this section.

Local Config Variables

The following local config variables control the options provided to the mailbox server JVM. Changes to these local config variables are preserved across upgrades. For your changes to take effect, you must restart the mailbox service (an example of setting them follows the list).

  • mailboxd_java_options: Most JVM options, including the type of garbage collector to use, are specified here. The default options included in this local config variable are listed in the next section. Please make sure you have all the ones you should have.
  • mailboxd_thread_stack_size: Should be set to 256k. This value is supplied as the parameter to the -Xss JVM option, which controls the OS stack size for threads running in the JVM. The default stack size on most systems is unnecessarily high, and given that the ZCS mailbox server is highly multi-threaded, a smaller stack size is critical to preventing memory exhaustion caused by too many threads. For example, if there are 3000 threads inside the JVM (see the note about IMAP above), you may end up using 750MB just for thread stacks. We have also run tests with a 128k stack size in the past; 256k is a conservative recommendation. If you do set a value below 256k, please set up some process to monitor your mailbox.log files for StackOverflowError and adjust the value higher if such errors are found.
  • mailboxd_java_heap_memory_percent: (Deprecated in ZCS 7.0. See mailboxd_java_heap_size below.) This variable determines the percentage of system memory that should be used for the Java heap (i.e., the -Xms and -Xmx JVM option values are derived from this local config variable). The default value is 30%; if you have 8GB of RAM, you will end up with a 2.4GB heap size. It is important to know that the Java process size will be much bigger than the heap size you configure here, because the JVM uses memory for other purposes as well (for example, thread stacks as noted above). We strongly recommend against increasing the heap size to more than 30% of system memory. However, there are many situations in which we recommend reducing the Java heap size:
    • If you have a limited amount of memory (i.e., < 8GB)
    • If you have more than just the mailbox service running on the server (e.g., MTA/LDAP)
    • If you have a lot of memory. If you have 32GB, 30% is 9.6GB; reduce the heap percent to 20. Beyond a Java heap size of 4-6GB, the system performs better if the memory is assigned to MySQL buffers instead.
  • mailboxd_java_heap_size: New in ZCS 7.0. Number of megabytes to be used as the maximum Java heap size (-Xms and -Xmx) of the JVM running mailboxd. For upgrades from a previous ZCS version, the mailboxd_java_heap_size variable is set according to the mailboxd_java_heap_memory_percent variable. For new installs, the mailboxd_java_heap_size variable is set as follows:
    • 25% of system memory for up to 16GB of system memory
    • 20% of system memory for > 16GB of system memory
    • For 32-bit systems with more than 2GB of memory, a maximum of 1.5GB is allocated.
  • mailboxd_java_heap_new_size_percent: New in ZCS 6.0. Percentage of the Java heap that should be allocated to the young generation. The default is 25%. This local config variable is used to determine the value of the -Xmn option of the JVM.
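
These are all local config values, so they are set with zmlocalconfig and picked up on the next mailbox service restart. A minimal sketch; the heap size of 4096 MB is only an illustrative value for a dedicated mailbox server with plenty of RAM, not a recommendation:

$ zmlocalconfig -e mailboxd_java_heap_size=4096
$ zmlocalconfig -e mailboxd_thread_stack_size=256k
$ zmmailboxdctl restart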


Current Setting:

mailboxd_java_heap_new_size_percent = 25

mailboxd_java_heap_size = 3993

mailboxd_thread_stack_size = 256k

mailboxd_java_options = -server -Djava.awt.headless=true -Dsun.net.inetaddr.ttl=60 -XX:+UseConcMarkSweepGC -XX:PermSize=128m -XX:MaxPermSize=350m -XX:SoftRefLRUPolicyMSPerMB=1 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:-OmitStackTraceInFastThrow -Djava.net.preferIPv4Stack=true
 

------------------------------------------------------------------------------------------------------------------------------------

Max message size

ZCS 5.0 has much better support for larger messages. In earlier versions, large messages caused increased memory pressure, and we recommend using the default 10MB max message size. Even with ZCS 5.0, do not increase the max message size arbitrarily: large messages cause increased I/O load on the system by their nature, and external mail servers will likely not accept them.
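
The maximum message size is controlled by the global config attribute zimbraMtaMaxMessageSize, in bytes. A minimal sketch for checking it and setting it explicitly to 10MB; verify the attribute name and units against your ZCS version before relying on it:

$ zmprov gcf zimbraMtaMaxMessageSize
$ zmprov mcf zimbraMtaMaxMessageSize 10485760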

Message cache size

As of ZCS 6.0, the message cache is an in-memory cache that stores the MIME structures of recently-accessed messages. In ZCS 5.0, the message cache stored not just the message structure, but also the content of messages less than 1MB.

This cache speeds up retrieval of message content for mail clients such as Mail.app, which repeatedly access the same message in a short time window.

ZCS 6: on large installs, increase the number of entries in the message cache to 10000 (the maximum allowed):

The message cache hit rate is tracked in /opt/zimbra/zmstat/mailboxd.csv, in the mbox_msg_cache column.

Current setting: zimbraMessageCacheSize: 2000

$ zmprov ms `zmhostname` zimbraMessageCacheSize 2500
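
To see whether a larger cache is paying off, track the hit rate column over time. A minimal sketch that locates the mbox_msg_cache column in the zmstat CSV (assuming zmstat is enabled and the column name matches your ZCS version):

$ head -1 /opt/zimbra/zmstat/mailboxd.csv | tr ',' '\n' | grep -n mbox_msg_cache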