How does Netty's non-blocking thread model work

Date: 2019-07-19 13:43:06

Tags: java multithreading netty nio

Currently I am reading "Reactive Programming with RxJava" by Tomasz Nurkiewicz. In chapter 5 he compares two different approaches to building an HTTP server, one of which is based on the Netty framework.

I cannot figure out how using such a framework helps to build a more responsive server, compared to the classic approach with one blocking-I/O thread per request.

The main concept is to use as few threads as possible, but if there is some blocking I/O operation, such as database access, doesn't that mean only a very limited number of concurrent connections can be processed at a time?

I have reproduced an example from that book.

Initializing the server:

public static void main(String[] args) throws Exception {
    EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // accepts incoming connections
    EventLoopGroup workerGroup = new NioEventLoopGroup();  // handles channel I/O and runs the handlers
    try {
        new ServerBootstrap()
                .option(ChannelOption.SO_BACKLOG, 50_000)
                .group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                .childHandler(new HttpInitializer())
                .bind(8080)
                .sync()
                .channel()
                .closeFuture()
                .sync();
    } finally {
        bossGroup.shutdownGracefully();
        workerGroup.shutdownGracefully();
    }
}

The size of the worker-group thread pool is availableProcessors * 2 = 8 on my machine.

To simulate some I/O operation and be able to see in the logs what is going on, I added a 1-second delay to the handler (it could just as well be some business-logic call):

class HttpInitializer extends ChannelInitializer<SocketChannel> {

    private final HttpHandler httpHandler = new HttpHandler();

    @Override
    public void initChannel(SocketChannel ch) {
        ch
                .pipeline()
                .addLast(new HttpServerCodec())
                .addLast(httpHandler);
    }
}

The handler itself:

class HttpHandler extends ChannelInboundHandlerAdapter {

    private static final Logger log = LoggerFactory.getLogger(HttpHandler.class);

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof HttpRequest) {
            try {
                System.out.println(format("Request received on thread '%s' from '%s'", Thread.currentThread().getName(), ((NioSocketChannel)ctx.channel()).remoteAddress()));
            } catch (Exception ex) {}
            sendResponse(ctx);
        }
    }

    private void sendResponse(ChannelHandlerContext ctx) {
        final DefaultFullHttpResponse response = new DefaultFullHttpResponse(
                HTTP_1_1,
                HttpResponseStatus.OK,
                Unpooled.wrappedBuffer("OK".getBytes(UTF_8)));
        try {
            // simulated blocking I/O; this sleeps on the event-loop thread itself
            TimeUnit.SECONDS.sleep(1);
        } catch (Exception ex) {
            System.out.println("Exception caught: " + ex);
        }
        response.headers().add("Content-length", 2);
        ctx.writeAndFlush(response);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        log.error("Error", cause);
        ctx.close();
    }
}

A client to simulate multiple concurrent connections:

public class NettyClient {

    public static void main(String[] args) throws Exception {
        NettyClient nettyClient = new NettyClient();
        for (int i = 0; i < 100; i++) {
            new Thread(() -> {
                try {
                    nettyClient.startClient();
                } catch (Exception ex) {
                }
            }).start();
        }
        TimeUnit.SECONDS.sleep(5);
    }

    public void startClient()
            throws IOException, InterruptedException {

        InetSocketAddress hostAddress = new InetSocketAddress("localhost", 8080);
        SocketChannel client = SocketChannel.open(hostAddress);

        System.out.println("Client... started");

        String threadName = Thread.currentThread().getName();

        // Send messages to server
        String[] messages = new String[]
                {"GET / HTTP/1.1\n" +
                        "Host: localhost:8080\n" +
                        "Connection: keep-alive\n" +
                        "Cache-Control: max-age=0\n" +
                        "Upgrade-Insecure-Requests: 1\n" +
                        "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36\n" +
                        "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3\n" +
                        "Accept-Encoding: gzip, deflate, br\n" +
                        "Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7"};

        for (int i = 0; i < messages.length; i++) {
            byte[] message = messages[i].getBytes();
            ByteBuffer buffer = ByteBuffer.wrap(message);
            client.write(buffer);
            System.out.println(messages[i]);
            buffer.clear();
        }
        client.close();
    }
}

Expected (throughput chart from the book; image not reproduced):

Our case is the blue line; the only difference is that the delay is set to 0.1 sec instead of the 1 sec explained above. With 100 concurrent connections I expected about 100 RPS, since the chart shows roughly 90k RPS at 100k concurrent connections with a 0.1 sec delay.

Actual: Netty processes only 8 concurrent connections at a time. It waits for the sleeps to expire, then processes the next 8 requests, and so on. As a result, it took about 13 seconds to complete all the requests. Clearly, to handle more clients it would need to allocate more threads.

But that is exactly how the classic blocking-I/O approach works! Here are the server-side logs; you can see that the first 8 requests were processed, and one second later the next 8:

2019-07-19T12:34:10.791Z Request received on thread 'nioEventLoopGroup-3-2' from '/127.0.0.1:49466'
2019-07-19T12:34:10.791Z Request received on thread 'nioEventLoopGroup-3-1' from '/127.0.0.1:49465'
2019-07-19T12:34:10.792Z Request received on thread 'nioEventLoopGroup-3-8' from '/127.0.0.1:49464'
2019-07-19T12:34:10.793Z Request received on thread 'nioEventLoopGroup-3-7' from '/127.0.0.1:49463'
2019-07-19T12:34:10.799Z Request received on thread 'nioEventLoopGroup-3-6' from '/127.0.0.1:49462'
2019-07-19T12:34:10.802Z Request received on thread 'nioEventLoopGroup-3-3' from '/127.0.0.1:49467'
2019-07-19T12:34:10.802Z Request received on thread 'nioEventLoopGroup-3-4' from '/127.0.0.1:49461'
2019-07-19T12:34:10.803Z Request received on thread 'nioEventLoopGroup-3-5' from '/127.0.0.1:49460'
2019-07-19T12:34:11.798Z Request received on thread 'nioEventLoopGroup-3-8' from '/127.0.0.1:49552'
2019-07-19T12:34:11.798Z Request received on thread 'nioEventLoopGroup-3-1' from '/127.0.0.1:49553'
2019-07-19T12:34:11.799Z Request received on thread 'nioEventLoopGroup-3-2' from '/127.0.0.1:49554'
2019-07-19T12:34:11.801Z Request received on thread 'nioEventLoopGroup-3-6' from '/127.0.0.1:49470'
2019-07-19T12:34:11.802Z Request received on thread 'nioEventLoopGroup-3-3' from '/127.0.0.1:49475'
2019-07-19T12:34:11.805Z Request received on thread 'nioEventLoopGroup-3-7' from '/127.0.0.1:49559'
2019-07-19T12:34:11.805Z Request received on thread 'nioEventLoopGroup-3-4' from '/127.0.0.1:49468'
2019-07-19T12:34:11.806Z Request received on thread 'nioEventLoopGroup-3-5' from '/127.0.0.1:49469'
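The batching in the logs happens because `Thread.sleep` parks the event-loop thread itself. A sketch of how the handler could avoid that in Netty (this rewrite is mine, not from the book): replace the sleep with a scheduled write on the channel's own executor, so the same 8 threads keep serving other channels during the one-second "delay".

```java
// Hypothetical non-blocking variant of sendResponse() (not from the book).
private void sendResponse(ChannelHandlerContext ctx) {
    final DefaultFullHttpResponse response = new DefaultFullHttpResponse(
            HTTP_1_1,
            HttpResponseStatus.OK,
            Unpooled.wrappedBuffer("OK".getBytes(UTF_8)));
    response.headers().add("Content-length", 2);
    // Register a timer and return immediately; the event-loop thread is free
    // to serve other channels while the "delay" elapses.
    ctx.executor().schedule(() -> ctx.writeAndFlush(response), 1, TimeUnit.SECONDS);
}
```

For genuinely blocking calls that cannot be turned into a scheduled callback (JDBC, legacy file I/O), Netty's `ChannelPipeline.addLast(EventExecutorGroup, ChannelHandler...)` overload lets you run a specific handler on a separate executor group instead of the event loop.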

So my question is: how does Netty (or anything similar), with its non-blocking and event-driven architecture, utilize the CPU more efficiently? If each loop group had only one thread, the pipeline would be:

  1. The ServerChannel's selection key is set to OP_ACCEPT.
  2. The ServerChannel accepts a connection, and the ClientChannel's selection key is set to OP_READ.
  3. A worker thread reads the content of this ClientChannel and passes it to the handler chain.
  4. Even if the ServerChannel thread accepts another client connection and puts it into some kind of queue, the worker thread cannot do anything until all the handlers in the chain have finished their work. From my point of view, a thread cannot simply switch to another job, since even waiting for a response from a remote database requires CPU ticks.
1 Answer:

Answer 0 (score: 0):

"How does Netty (or something similar), with its non-blocking and event-driven architecture, utilize the CPU more efficiently?"

It doesn't.

The goal of asynchronous (non-blocking and event-driven) programming is to save core memory, by using tasks rather than threads as the units of parallel work. This makes it possible to have millions of parallel activities instead of thousands.

CPU cycles cannot be saved automatically; that is always an intellectual effort.
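The "tasks instead of threads" point can be made concrete: a parked platform thread reserves an entire call stack (typically on the order of hundreds of kilobytes to a megabyte), while a pending task is just a small heap object in a queue. A JDK-only sketch (class name and numbers are mine, for illustration) that keeps 100,000 concurrent "activities" pending on a single thread, something 100,000 platform threads could not do cheaply:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TasksNotThreads {

    // Keeps `n` delayed "activities" in flight at once on ONE thread. Each
    // pending activity is a queue entry, not a parked thread with its own stack.
    static int runPending(int n) throws InterruptedException {
        ScheduledExecutorService loop = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(n);
        for (int i = 0; i < n; i++) {
            // Register completion for later and move on immediately,
            // instead of dedicating a sleeping thread to each activity.
            loop.schedule(done::countDown, 200, TimeUnit.MILLISECONDS);
        }
        done.await();       // all n activities were pending concurrently
        loop.shutdown();
        return n;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("completed: " + runPending(100_000));
    }
}
```

All 100,000 delays overlap, so the whole run takes roughly the 200 ms delay, and memory stays bounded by the task queue rather than by thread stacks.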
