Ubuntu上的kubernetes:微服务问题通过领事与其他主机交互

Date: 2018-11-27 12:19:54

Tags: kubernetes consul minikube micronaut

I have been at this for a few weeks now and cannot resolve the following problem:

It is summarised in this video: https://www.youtube.com/watch?v=48gb1HBHuC8&t=358s

but since then the code itself / scripts have been updated. There are various shell scripts involved.

The microservice application is written with Micronaut, and it appears to work fine when run as written, without going through Kubernetes (so we know it does work).

Now, trying to get it working through Kubernetes, I get the following:

kubectl get svc
NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                   AGE
billing                      ClusterIP   10.104.228.223   <none>        8085/TCP                                                                  3h
front                        ClusterIP   10.107.198.62    <none>        8080/TCP                                                                  8m
kafka-service                ClusterIP   None             <none>        9093/TCP                                                                  3h
kind-cheetah-consul-dns      ClusterIP   10.101.52.36     <none>        53/TCP,53/UDP                                                             3h
kind-cheetah-consul-server   ClusterIP   None             <none>        8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP   3h
kind-cheetah-consul-ui       ClusterIP   10.97.158.51     <none>        80/TCP                                                                    3h
kubernetes                   ClusterIP   10.96.0.1        <none>        443/TCP                                                                   3h
mongodb                      ClusterIP   10.104.205.91    <none>        27017/TCP                                                                 3h
react                        ClusterIP   10.106.74.166    <none>        3000/TCP                                                                  3h
stock                        ClusterIP   10.109.203.36    <none>        8083/TCP                                                                  9m
waiter                       ClusterIP   10.107.166.108   <none>        8084/TCP                                                                  3h
zipkin-deployment            NodePort    10.108.102.81    <none>        9411:31919/TCP                                                            3h
zk-cs                        ClusterIP   10.100.139.233   <none>        2181/TCP                                                                  3h
zk-hs                        ClusterIP   None             <none>        2888/TCP,3888/TCP                                                         3h

Note that the services named front and stock are the two we will focus on.

They are known as front-deployment and stock-deployment, exposed as services. As you can see from Consul, it is renamed:

stock-675d778b7d-bg98c:8083
stock:8083

These are resolvable names: in this case the IP 10.109.203.36 resolves for stock-deployment, which from here on is simply referred to as stock.
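As a sanity check (purely illustrative; the exact output depends on the cluster), the Service-to-pod mapping behind stock can be confirmed with:

# Show the ClusterIP of the "stock" Service and the pod IP(s) it routes to
kubectl get svc stock
kubectl get endpoints stock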

We have the following pods:

kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
billing-59b66cb85d-24mnz             1/1     Running   13         3h
curl-775f9567b5-vzclh                1/1     Running   2          27m
front-7c6d588fd4-ftk7n               1/1     Running   2          18m
kafka-0                              1/1     Running   13         3h
kind-cheetah-consul-server-0         1/1     Running   4          3h
kind-cheetah-consul-wgwfk            1/1     Running   4          3h
mongodb-744f8f5d4-9mgh2              1/1     Running   4          3h
react-6b7f565d96-h5khb               1/1     Running   4          3h
stock-675d778b7d-bg98c               1/1     Running   2          18m
waiter-584b466754-bzs7s              1/1     Running   13         3h
zipkin-deployment-5bf954f879-tbhdf   1/1     Running   4          3h
zk-0    

If I run:

kubectl attach curl-775f9567b5-vzclh -c curl -i -t
If you don't see a command prompt, try pressing enter.
[ root@curl-775f9567b5-vzclh:/ ]$ nslookup stock
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      stock
Address 1: 10.109.203.36 stock.default.svc.cluster.local
[ root@curl-775f9567b5-vzclh:/ ]$ nslookup front
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      front
Address 1: 10.107.198.62 front.default.svc.cluster.local

If I run:

kubectl exec front-7c6d588fd4-ftk7n -- nslookup stock
nslookup: can't resolve '(null)': Name does not resolve

Name:      stock
Address 1: 10.109.203.36 stock.default.svc.cluster.local


$ kubectl exec stock-675d778b7d-bg98c -- nslookup front
nslookup: can't resolve '(null)': Name does not resolve

Name:      front
Address 1: 10.107.198.62 front.default.svc.cluster.local

With either of these approaches, DNS appears to be working fine.

If I run:

minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ curl 10.109.203.36:8083/stock/lookup/Budweiser
{"name":"Budweiser","bottles":1000,"barrels":2.0,"availablePints":654.636}$ 

The problem is this:

 curl 10.107.198.62:8080/lookup/Budweiser
{"message":"Internal Server Error: The source Publisher is empty"}$ 
$ 

The curl above calls the beer-front application's GatewayController, whose lookup method calls stockControllerClient.find, which in turn calls the StockController in the beer-stock application:

@Get("/lookup/{name}")
@ContinueSpan
public Maybe<BeerStock> lookup(@SpanTag("gateway.beerLookup") @NotBlank String name) {
    System.out.println("Looking up beer for "+name+" "+new Date());
    return stockControllerClient.find(name)
            .onErrorReturnItem(new BeerStock());
}
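
For context, stockControllerClient is a Micronaut declarative HTTP client. The project's actual interface is not reproduced here, but it presumably looks roughly like the sketch below (the service id, path and return type are assumptions based on the behaviour above, not copied from the repository):

// Hypothetical sketch of the declarative client injected into the gateway.
// The id "stock" is the name Micronaut asks the discovery client
// (Consul / Kubernetes) to resolve to a host:port when find() is called.
@Client(id = "stock")
public interface StockControllerClient {

    @Get("/stock/lookup/{name}")
    Maybe<BeerStock> find(@NotBlank String name);
}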

I can see that it does try to call the client:

 kubectl logs front-7c6d588fd4-ftk7n
11:54:27.629 [main] INFO  i.m.context.env.DefaultEnvironment - Established active environments: [cloud, k8s]
11:54:31.662 [main] INFO  io.micronaut.runtime.Micronaut - Startup completed in 4023ms. Server Running: http://front-7c6d588fd4-ftk7n:8080
11:54:32.168 [nioEventLoopGroup-1-3] INFO  i.m.d.registration.AutoRegistration - Registered service [gateway] with Consul
Looking up beer for Budweiser Tue Nov 27 12:13:38 GMT 2018
12:13:38.851 [nioEventLoopGroup-1-14] ERROR i.m.h.s.netty.RoutingInBoundHandler - Unexpected error occurred: The source Publisher is empty
java.util.NoSuchElementException: The source Publisher is empty

However, none of the actual client methods seem able to reach the remote service.

The main problem is that I am not sure which part is going wrong when the HttpClient fails to connect to the remote service. If Consul were configured incorrectly, surely the applications would not be able to register against it and would not start up at all.
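
One way to narrow it down (an illustrative check using the pod names from the listings above) is to hit the stock service directly from the dedicated curl pod, bypassing the Micronaut client, and to ask Consul what it has registered via the existing 8500 port-forward:

# Call the stock service by its Kubernetes DNS name from inside the cluster
kubectl exec curl-775f9567b5-vzclh -c curl -- curl -s http://stock:8083/stock/lookup/Budweiser

# List the services Consul knows about (uses the kubectl port-forward on 8500)
curl -s http://localhost:8500/v1/catalog/services

If the first call works but the gateway still fails, the problem is in service discovery / the registered addresses rather than in cluster networking.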

Versions:

 kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}


 $ helm version
    Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}


$ minikube version
minikube version: v0.30.0

The following port-forwards to localhost are in place:

ps auwx|grep kubectl
xxx       6916  0.0  0.1  50584  9952 pts/4    Sl   11:51   0:00 kubectl port-forward kind-cheetah-consul-server-0 8500:8500
xxx       7332  0.0  0.1  49524  9936 pts/4    Sl   11:52   0:00 kubectl port-forward react-6b7f565d96-h5khb 3000:3000
xxx       8704  0.0  0.1  49524  9644 pts/4    Sl   11:55   0:00 kubectl port-forward front-7c6d588fd4-ftk7n 8080:8080

Interestingly, I enabled HTTP client tracing and hit the front application at currentIP:8080/stock; these are the logs produced:

 09:34:27.929 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.discovery.event.ServiceStartedEvent] of candidate Definition: io.micronaut.health.HeartbeatTask
09:34:27.929 [pool-1-thread-1] TRACE i.m.context.DefaultBeanContext - Existing bean io.micronaut.health.HeartbeatTask@363a3d15 does not match qualifier <HeartbeatEvent> for type io.micronaut.context.event.ApplicationEventListener
09:34:27.929 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.server.event.ServerStartupEvent] of candidate Definition: io.micronaut.discovery.consul.ConsulServiceInstanceList
09:34:27.929 [pool-1-thread-1] TRACE i.m.context.DefaultBeanContext - Existing bean io.micronaut.discovery.consul.ConsulServiceInstanceList@5d01ea21 does not match qualifier <HeartbeatEvent> for type io.micronaut.context.event.ApplicationEventListener
09:34:27.929 [pool-1-thread-1] DEBUG i.m.context.DefaultBeanContext - Qualifying bean [io.micronaut.context.event.ApplicationEventListener] from candidates [Definition: io.micronaut.discovery.consul.ConsulServiceInstanceList, Definition: io.micronaut.discovery.consul.registration.ConsulAutoRegistration, Definition: io.micronaut.http.client.scope.ClientScope, Definition: io.micronaut.health.HeartbeatTask, Definition: io.micronaut.runtime.context.scope.refresh.RefreshScope] for qualifier: <HeartbeatEvent> 
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.server.event.ServerStartupEvent] of candidate Definition: io.micronaut.discovery.consul.ConsulServiceInstanceList
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.context.scope.refresh.RefreshEvent] of candidate Definition: io.micronaut.http.client.scope.ClientScope
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.discovery.event.ServiceStartedEvent] of candidate Definition: io.micronaut.health.HeartbeatTask
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.context.scope.refresh.RefreshEvent] of candidate Definition: io.micronaut.runtime.context.scope.refresh.RefreshScope
09:34:27.930 [pool-1-thread-1] DEBUG i.m.context.DefaultBeanContext - Found 1 beans for type [<HeartbeatEvent> io.micronaut.context.event.ApplicationEventListener]: [io.micronaut.discovery.consul.registration.ConsulAutoRegistration@3402b4c9] 
09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.ApplicationEventPublisher - Established event listeners [io.micronaut.discovery.consul.registration.ConsulAutoRegistration@3402b4c9] for event: io.micronaut.health.HeartbeatEvent[source=io.micronaut.http.server.netty.NettyEmbeddedServerInstance@3f1ddac2]
09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.ApplicationEventPublisher - Invoking event listener [io.micronaut.discovery.consul.registration.ConsulAutoRegistration@3402b4c9] for event: io.micronaut.health.HeartbeatEvent[source=io.micronaut.http.server.netty.NettyEmbeddedServerInstance@3f1ddac2]
09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.PropertySourcePropertyResolver - No value found for property: vcap.application.instance_id
09:34:27.931 [pool-1-thread-1] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Publisher pass(String checkId,String note)] invocation on target: io.micronaut.discovery.consul.client.v1.AbstractConsulClient$Intercepted@47b179d7
09:34:27.931 [pool-1-thread-1] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc] in chain for method invocation: Publisher pass(String checkId,String note)
09:34:27.931 [pool-1-thread-1] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] in chain for method invocation: Publisher pass(String checkId,String note)
09:34:27.938 [nioEventLoopGroup-1-4] DEBUG i.m.d.registration.AutoRegistration - Successfully reported passing state to Consul
09:34:30.602 [nioEventLoopGroup-1-12] DEBUG i.m.h.server.netty.NettyHttpServer - Server waiter-7dd7998f77-bfkbt:8084 Received Request: GET /waiter/beer/a
09:34:30.602 [nioEventLoopGroup-1-12] DEBUG i.m.h.s.netty.RoutingInBoundHandler - Matching route GET - /waiter/beer/a
09:34:30.604 [nioEventLoopGroup-1-12] DEBUG i.m.h.s.netty.RoutingInBoundHandler - Matched route GET - /waiter/beer/a to controller class micronaut.demo.beer.$WaiterControllerDefinition$Intercepted
09:34:30.606 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Single serveBeerToCustomer(String customerName)] invocation on target: micronaut.demo.beer.$WaiterControllerDefinition$Intercepted@a624fe7
09:34:30.606 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.validation.ValidatingInterceptor@6642e95d] in chain for method invocation: Single serveBeerToCustomer(String customerName)
09:34:30.607 [nioEventLoopGroup-1-12] TRACE o.h.v.i.e.c.SimpleConstraintTree - Validating value a against constraint defined by ConstraintDescriptorImpl{annotation=j.v.c.NotBlank, payloads=[], hasComposingConstraints=true, isReportAsSingleInvalidConstraint=false, elementType=PARAMETER, definedOn=DEFINED_IN_HIERARCHY, groups=[interface javax.validation.groups.Default], attributes={groups=[Ljava.lang.Class;@71cccd2d, message={javax.validation.constraints.NotBlank.message}, payload=[Ljava.lang.Class;@5044372c}, constraintType=GENERIC, valueUnwrapping=DEFAULT}.
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.aop.chain.InterceptorChain$$Lambda$449/1045761764@6d4672c0] in chain for method invocation: Single serveBeerToCustomer(String customerName)
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)] invocation on target: micronaut.demo.beer.client.TicketControllerClient$Intercepted@eaba75d
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc] in chain for method invocation: HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] in chain for method invocation: HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)
09:34:30.609 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Flowable getInstances(String serviceId)] invocation on target: compositeDiscoveryClient(consul,kubernetes)
09:34:30.610 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.cache.interceptor.CacheInterceptor@2b772100] in chain for method invocation: Flowable getInstances(String serviceId)
09:34:30.610 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.aop.chain.InterceptorChain$$Lambda$449/1045761764@19a66abd] in chain for method invocation: Flowable getInstances(String serviceId)
09:34:30.610 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Publisher getHealthyServices(String service,Boolean passing,String tag,String dc)] invocation on target: io.micronaut.discovery.consul.client.v1.AbstractConsulClient$Intercepted@47b179d7
09:34:30.611 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc] in chain for method invocation: Publisher getHealthyServices(String service,Boolean passing,String tag,String dc)
09:34:30.611 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] in chain for method invocation: Publisher getHealthyServices(String service,Boolean passing,String tag,String dc)
09:34:30.691 [nioEventLoopGroup-1-12] ERROR i.m.r.intercept.RecoveryInterceptor - Type [micronaut.demo.beer.client.TicketControllerClient$Intercepted] executed with error: Empty body
io.micronaut.http.client.exceptions.HttpClientResponseException: Empty body
    at io.micronaut.http.client.HttpClient.lambda$null$0(HttpClient.java:161)
    at java.util.Optional.orElseThrow(Optional.java:290)
    at io.micronaut.http.client.HttpClient.lambda$retrieve$1(HttpClient.java:161)
    at io.micronaut.core.async.publisher.Publishers$1.doOnNext(Publishers.java:143)
    at io.micronaut.core.async.subscriber.CompletionAwareSubscriber.onNext(CompletionAwareSubscriber.java:53)
    at io.reactivex.internal.util.HalfSerializer.onNext(HalfSerializer.java:45)
    at io.reactivex.internal.subscribers.StrictSubscriber.onNext(StrictSubscriber.java:97)
    at io.reactivex.internal.operators.flowable.FlowableSwitchMap$SwitchMapSubscriber.drain(FlowableSwitchMap.java:307)
    at io.reactivex.internal.operators.flowable.FlowableSwitchMap$SwitchMapInnerSubscriber.onNext(FlowableSwitchMap.java:391)
    at io.reactivex.internal.operators.flowable.FlowableSubscribeOn$SubscribeOnSubscriber.onNext(FlowableSubscribeOn.java:97)
    at io.reactivex.internal.operators.flowable.FlowableOnErrorNext$OnErrorNextSubscriber.onNext(FlowableOnErrorNext.java:79)
    at io.reactivex.internal.operators.flowable.FlowableTimeoutTimed$TimeoutSubscriber.onNext(FlowableTimeoutTimed.java:99)
    at io.micronaut.http.client.filters.ClientServerRequestTracingPublisher$1.lambda$onNext$1(ClientServerRequestTracingPublisher.java:60)
    at io.micronaut.http.context.ServerRequestContext.with(ServerRequestContext.java:53)
    at io.micronaut.http.client.filters.ClientServerRequestTracingPublisher$1.onNext(ClientServerRequestTracingPublisher.java:60)
    at io.micronaut.http.client.filters.ClientServerRequestTracingPublisher$1.onNext(ClientServerRequestTracingPublisher.java:52)
    at io.reactivex.internal.util.HalfSerializer.onNext(HalfSerializer.java:45)
    at io.reactivex.internal.subscribers.StrictSubscriber.onNext(StrictSubscriber.java:97)
    at io.reactivex.internal.operators.flowable.FlowableCreate$NoOverflowBaseAsyncEmitter.onNext(FlowableCreate.java:403)
    at io.micronaut.http.client.DefaultHttpClient$10.channelRead0(DefaultHttpClient.java:1773)
    at io.micronaut.http.client.DefaultHttpClient$10.channelRead0(DefaultHttpClient.java:1705)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.micronaut.http.netty.stream.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:186)
    at io.micronaut.http.netty.stream.HttpStreamsClientHandler.channelRead(HttpStreamsClientHandler.java:181)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
    at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
    at io.micronaut.tracing.instrument.util.TracingRunnable.run(TracingRunnable.java:54)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
09:34:30.692 [nioEventLoopGroup-1-12] DEBUG i.m.r.intercept.RecoveryInterceptor - Type [micronaut.demo.beer.client.TicketControllerClient$Intercepted] resolved fallback: HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)
09:34:30.692 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - Looking up existing bean for key: @Fallback micronaut.demo.beer.client.TicketControllerClient
09:34:30.692 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - No existing bean found for bean key: @Fallback micronaut.demo.beer.client.TicketControllerClient
09:34:30.693 [nioEventLoopGroup-1-12] DEBUG i.m.context.DefaultBeanContext - Resolving beans for type: <RecoveryInterceptor|HttpClientIntroductionAdvice> io.micronaut.aop.Interceptor 
09:34:30.693 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - Looking up existing beans for key: <RecoveryInterceptor|HttpClientIntroductionAdvice> io.micronaut.aop.Interceptor
09:34:30.693 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - Found 2 existing beans for type [<RecoveryInterceptor|HttpClientIntroductionAdvice> io.micronaut.aop.Interceptor]: [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc, io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] 
09:34:30.694 [nioEventLoopGroup-1-12] DEBUG i.m.context.DefaultBeanContext - Created bean [micronaut.demo.beer.client.NoCostTicket$Intercepted@77053015] from definition [Definition: micronaut.demo.beer.client.NoCostTicket$Intercepted] with qualifier [@Fallback]
 Blank beer from fall back being served
09:34:30.695 [nioEventLoopGroup-1-12] DEBUG i.m.h.s.netty.RoutingInBoundHandler - Encoding emitted response object [micronaut.demo.beer.Beer@5caca659] using codec: io.micronaut.jackson.codec.JsonMediaTypeCodec@2ba33e2c

Any help would be greatly appreciated. The project is linked above and includes various shell scripts; fully installing and running it is fairly involved, so watching a bit of the video may be more practical.

UPDATE: I have got essentially nowhere with this, but I really cannot move on. I have now upgraded to the latest Consul Helm chart v0.5.0 and Micronaut 1.0.4, but I still face the same issue. Not sure whether this is normal:

09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.PropertySourcePropertyResolver - No value found for property: vcap.application.instance_id

I eventually made a very basic two-application version on this branch.

There are newer, more complete logs, from a fresh install after running ./install-minikube.sh (the script would need the Docker username changed if anyone else wants to run it): logs produced

1 answer:

Answer 0 (score: 0)

It looks like your beer front cannot connect to Consul, which is defined as a headless service. You will notice that kind-cheetah-consul-server has no ClusterIP. You could try connecting directly to "kind-cheetah-consul-server-0.[headless service fqdn]" or just "kind-cheetah-consul-server-0". Since your Consul runs as a StatefulSet, you get a stable pod name and DNS entry.
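
As a concrete illustration (assuming everything lives in the default namespace), the StatefulSet pod's stable DNS name would be kind-cheetah-consul-server-0.kind-cheetah-consul-server.default.svc.cluster.local, which can be checked from any pod:

# Resolve the Consul server pod through its headless-service DNS name
kubectl exec curl-775f9567b5-vzclh -c curl -- nslookup kind-cheetah-consul-server-0.kind-cheetah-consul-server.default.svc.cluster.local

If that resolves, pointing the applications' Consul client configuration (for example Micronaut's consul.client.defaultZone setting) at that name, rather than at localhost or a pod IP, would give them a stable address for registration and discovery.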