Summary
After upgrading to Reactor Netty 1.3.0-SNAPSHOT, which includes Netty 4.2.x with full io_uring support, I observed a massive performance regression when using the io_uring transport in a minimal hello-world server. Under identical conditions:

- `nio` and `epoll`: ~230K–250K RPS
- `io_uring`: ~29K RPS

Additionally, `io_uring` causes the server to occasionally hang during shutdown; it doesn't always reproduce, but it happens frequently enough to be notable.

This behavior does not occur in pure Netty 4.2.x or Vert.x 5, where io_uring gives the expected significant performance boost under the same conditions.
🔬 Benchmark setup
- Tested transports: `nio`, `epoll`, `io_uring`
- Load generator: `wrk`
- Clients: 2 threads, 200 connections
- Endpoint: `/hello-world` returning a static string
- Duration: 10 seconds
- Transport verification: a `/thread-name` endpoint returns the current thread name to confirm which event loop (nio, epoll, io_uring) is being used.

Example command

```
wrk -c200 -t2 -d10s http://127.0.0.1:8080/hello-world
```
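The transport-verification step can also be scripted. Below is a minimal Java sketch of the classification; the thread-name format is an assumption based on reactor-netty's usual `<prefix>-<transport>-<n>` event-loop naming (e.g. `my-loop-sources-nio-1`), so adjust the markers if your thread names differ:

```java
// Hypothetical helper mirroring the /thread-name verification step: classify
// the transport from the event-loop thread name. The name format is an
// assumption -- with the "my-loop-sources" prefix used below, reactor-netty
// event loops are typically named like "my-loop-sources-nio-1" or
// "my-loop-sources-epoll-2".
public class TransportCheck {

    static String inferTransport(String threadName) {
        // Check io_uring first: its marker is the most specific.
        if (threadName.contains("uring")) return "io_uring";
        if (threadName.contains("epoll")) return "epoll";
        if (threadName.contains("nio")) return "nio";
        return "unknown";
    }

    public static void main(String[] args) {
        System.out.println(inferTransport("my-loop-sources-nio-1"));
    }
}
```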
📈 Results
| Transport | RPS (Requests/sec) | Notes |
|---|---|---|
| nio | ~226K – ~250K | Stable performance |
| epoll | ~228K – ~245K | Comparable to NIO |
| io_uring | ~29K | Severe degradation |
💣 Shutdown issue with io_uring
Sometimes, when stopping the application, the server hangs indefinitely during `disposeNow()`. This does not happen with `nio` or `epoll`. It's intermittent and hard to reproduce 100% of the time, but it happens consistently enough to be concerning.
✅ Environment
| Component | Value |
|---|---|
| Reactor Netty | 1.3.0-SNAPSHOT |
| Netty | 4.2.x |
| Java | Temurin OpenJDK 21.0.3 (LTS) |
| OS Kernel | Linux 6.14.0-23-generic |
| Architecture | ARM64 (via `linux-aarch_64`) |
⚙️ Activation of native transports
Only one transport dependency is included at a time to control which one is loaded:
```xml
<!-- io_uring -->
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-transport-native-io_uring</artifactId>
    <classifier>linux-aarch_64</classifier>
</dependency>

<!-- epoll (commented out) -->
<!--
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-transport-native-epoll</artifactId>
    <classifier>linux-aarch_64</classifier>
</dependency>
-->
```
No manual `-Dreactor.netty.transport=...` override is used; the default preferred transport is picked.
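To double-check which native transports are actually loadable from the classpath (rather than relying on the dependency swap alone), a quick reflection-based probe can help. The class names below are assumptions based on Netty 4.2.x, where io_uring graduated from the incubator into `io.netty.channel.uring`:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: confirm which native transport classes are on the classpath before
// trusting the default transport selection. Class names assumed from
// Netty 4.2.x.
public class TransportsOnClasspath {

    static boolean present(String fqcn) {
        try {
            Class.forName(fqcn);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    static List<String> available() {
        List<String> transports = new ArrayList<>();
        if (present("io.netty.channel.uring.IoUring")) transports.add("io_uring");
        if (present("io.netty.channel.epoll.Epoll")) transports.add("epoll");
        transports.add("nio"); // always available on the JVM
        return transports;
    }

    public static void main(String[] args) {
        System.out.println(available());
    }
}
```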
🧪 Minimal reproducible server
```kotlin
@Component
class ServerApplicationRunner {

    private var server: DisposableServer? = null

    @EventListener(ApplicationReadyEvent::class)
    fun start() {
        thread {
            server = HttpServer.create().port(8080)
                .runOn(LoopResources.create("my-loop-sources"), true)
                .route { routes ->
                    routes
                        .get("/thread-name") { _, res ->
                            res.sendString(Mono.just(Thread.currentThread().name + "\n"))
                        }
                        .get("/hello-world") { _, res ->
                            res.sendString(Mono.just("Hello, World!"))
                        }
                }
                .bindNow()
            println("Server started")
            // Keep the background thread alive until the server is disposed.
            server?.onDispose()?.block()
        }
    }

    @EventListener(ContextClosedEvent::class)
    fun stop() {
        println("Stopping server...")
        server?.disposeNow()
        println("Server stopped")
    }
}
```
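As a stopgap for the intermittent hang, the shutdown wait can be bounded. In the reproducer above the equivalent would be `server?.disposeNow(Duration.ofSeconds(10))`, since reactor-netty's `DisposableChannel` has a timed `disposeNow` overload that throws when the timeout elapses; the same pattern is sketched here with plain JDK primitives so it runs standalone:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Bounded-shutdown sketch: run a potentially hanging stop action on another
// thread and give up after a deadline, so the JVM can still exit.
public class BoundedShutdown {

    /** Returns true if stopAction finished within the deadline, false if it hung. */
    static boolean stopWithTimeout(Runnable stopAction, long timeoutSeconds) {
        try {
            CompletableFuture.runAsync(stopAction).get(timeoutSeconds, TimeUnit.SECONDS);
            return true;  // clean shutdown within the deadline
        } catch (TimeoutException e) {
            return false; // stop call hung, as observed intermittently with io_uring
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }
}
```

This does not fix the underlying hang, but it keeps a stuck `disposeNow()` from blocking application shutdown indefinitely.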
🙏 Request
Please investigate whether Reactor Netty's `io_uring` integration introduces unintended overhead or misconfiguration. Since io_uring performs excellently in other frameworks such as pure Netty and Vert.x under the same environment, this seems specific to the Reactor layer.
Let me know if any additional diagnostics or debug logs would help. I’m happy to assist further.