JDBC connection errors when switching a Spring Boot app from embedded Tomcat to embedded Undertow

I have a Spring Boot app running with the default embedded Tomcat (and Tomcat JDBC connection pooling). It is in production and running fine. The database is MySQL.

I am now doing some stress testing in my test environment to see whether I get any obvious benefit from switching from embedded Tomcat to embedded Undertow. People claim to see a visible improvement in throughput after this switch, due to the asynchronous nature of Undertow's request handling.
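For reference, the switch itself is the usual starter swap (a Maven sketch, assuming the app uses spring-boot-starter-web; a Gradle build would use the equivalent exclusion):

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
        <exclusions>
            <!-- drop the default embedded Tomcat -->
            <exclusion>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-tomcat</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <!-- pull in embedded Undertow instead -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-undertow</artifactId>
    </dependency>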

With Tomcat excluded and Undertow added as above, I run my stress-testing script to generate roughly 500 requests per second, keep that load up for 5 minutes, and watch how the app behaves. When I do this, after the first few seconds I intermittently start getting JDBC exceptions like the one below.

    org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection
        at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:431) ~[spring-orm-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
        at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:373) ~[spring-tx-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
        at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:426) ~[spring-tx-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
        at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:275) ~[spring-tx-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
        at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96) ~[spring-tx-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
        at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:136) ~[spring-tx-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
        at org.springframework.data.jpa.repository.support.CrudMethodMetadataPostProcessor$CrudMethodMetadataPopulatingMethodInterceptor.invoke(CrudMethodMetadataPostProcessor.java:133) ~[spring-data-jpa-1.10.2.RELEASE.jar!/:na]
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]

In other words, a JDBC connection cannot be acquired.

NOTE: If I remove embedded Undertow and go back to embedded Tomcat, the same test runs fine, without any JDBC-connection-related exceptions.

My underlying Tomcat JDBC pool has 100 database connections. For Undertow, I tried 100 worker threads and 100 IO threads (property sketch below).
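In Spring Boot property form that corresponds roughly to the following (property names as documented for Boot 1.4; the values are the ones described above):

    # Undertow thread pools
    server.undertow.io-threads=100
    server.undertow.worker-threads=100

    # Tomcat JDBC pool size
    spring.datasource.tomcat.max-active=100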

I also tried HikariCP instead of the default Tomcat JDBC pooling, with maximumPoolSize=100 and connectionTimeout=60000. Again, embedded Tomcat + HikariCP runs fine under this stress test, but embedded Undertow + HikariCP throws similar exceptions.
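In property form that would be approximately (again, names as in the Boot 1.4 reference):

    spring.datasource.type=com.zaxxer.hikari.HikariDataSource
    spring.datasource.hikari.maximum-pool-size=100
    spring.datasource.hikari.connection-timeout=60000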

So something different is clearly happening when Undertow comes into the picture, but I am not able to work out what. Note that these exceptions come intermittently, but they show up in every run of the stress test when Undertow is used.

I have searched for similar reports, but this does not seem to be a common complaint about Undertow.

Any help in analysing the situation would save a lot of time.

First of all, you might be better off changing one thing at a time, to reduce the number of potential causes.

Undertow: 100 IO threads is way too many. You should probably stick with the default here, which I believe is one IO thread per core. The IO threads' only job is to manage the open connections and handle any non-blocking work. JDBC/SQL queries are blocking, so you need to make sure that any endpoint that blocks dispatches the request to the worker threads. You can use the BlockingHandler for this (I'm not sure how to do it with Spring); see the sketch below. Likewise, 100 worker threads is probably excessive; the default is a lot lower, I believe somewhere in the 20-30 range. Make sure this is working correctly with your existing connection pool BEFORE switching to HikariCP. I would suggest leaving the thread pools at their defaults to start with and making sure you are dispatching blocking work to the worker threads.
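To illustrate what "dispatch to the worker threads" means, here is a minimal sketch using Undertow's plain (non-Spring) API; the port and the handler body are made up for the example:

    import io.undertow.Undertow;
    import io.undertow.server.HttpHandler;
    import io.undertow.server.HttpServerExchange;
    import io.undertow.server.handlers.BlockingHandler;

    public class BlockingExample {

        public static void main(String[] args) {
            // Handler that does blocking work; a JDBC call would live here.
            HttpHandler blockingWork = new HttpHandler() {
                @Override
                public void handleRequest(HttpServerExchange exchange) throws Exception {
                    // BlockingHandler has already dispatched this request off the
                    // IO thread, so blocking here does not stall the IO threads.
                    exchange.getResponseSender().send("done");
                }
            };

            Undertow server = Undertow.builder()
                    .addHttpListener(8080, "0.0.0.0")
                    // ioThreads/workerThreads are left at their defaults; they can be
                    // overridden with setIoThreads(...) / setWorkerThreads(...) if needed.
                    .setHandler(new BlockingHandler(blockingWork)) // dispatch to the worker pool
                    .build();
            server.start();
        }
    }

In a Spring Boot app you would not build the server yourself; the sketch is only meant to show what keeping blocking work off the IO threads looks like in Undertow terms.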

HikariCP: 100 connections is also a lot for HikariCP unless you have a great many very long-running queries. The HikariCP wiki ("About Pool Sizing") has more information about connection pool sizing.
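As a rough starting point (the sizing guidance there suggests something on the order of connections = (cores * 2) + effective spindle count), a much smaller pool is more typical; the values below are only a hypothetical starting point to tune from:

    # hypothetical starting point; tune from measurements
    spring.datasource.hikari.maximum-pool-size=10
    spring.datasource.hikari.connection-timeout=30000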

Don't try changing both at the same time; it will be much harder to track down what is going on in that scenario.