typeProviders with a private GKE cluster

Date: 2019-07-02 22:56:33

Tags: google-deployment-manager

I'm new to Google Cloud Deployment Manager (GCM) and have been writing some code to practice with it. I've read a few interesting articles describing how to extend GCM with deploymentmanager.v2beta.typeProvider and use it to configure Kubernetes objects themselves as additional deployments. It's a very compelling way to extend the tool, and it seems to open up huge opportunities for declarative automation against any well-designed API.

I'm trying to create a private node / public endpoint GKE cluster that is managed by custom typeProvider resources corresponding to the GKE API calls. It appears that a public node / public endpoint GKE cluster is the only configuration that works with GCM custom typeProviders, which seems wrong given that a private node / public endpoint GKE configuration is possible.

It would be strange if deploymentmanager.v2beta.typeProvider did not support a private node / public endpoint GKE configuration.
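For reference, the shape I am aiming for looks roughly like the snippet below. This is only a sketch: I am assuming the container.v1 privateClusterConfig block accepts enablePrivateEndpoint, where leaving it False keeps the master reachable on its public endpoint while the nodes themselves get only private IPs.

# Sketch of the private node / public endpoint combination (assumption: the
# container.v1 API exposes enablePrivateEndpoint on privateClusterConfig).
private_cluster_config = {
    'enablePrivateNodes': True,        # nodes receive internal IPs only
    'enablePrivateEndpoint': False,    # master keeps its public endpoint
    'masterIpv4CidrBlock': '10.0.0.0/28'
}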

As an aside, I suspect that a private node / private endpoint cluster fronted by Cloud Endpoints to satisfy the GCM typeProvider requirement for a public API endpoint should also be a valid architecture, but I have not tested that yet.

Using the following code:

def GenerateConfig(context):
    # Some constant type vars that are not really
    resources = []
    outputs = []
    gcp_type_provider = 'deploymentmanager.v2beta.typeProvider'
    extension_prefix = 'k8s'
    api_version = 'v1'
    kube_initial_auth = {
        'username': 'luis',
        'password': 'letmeinporfavors',
        "clientCertificateConfig": {
            'issueClientCertificate': True
        }
    }

    # EXTEND API TO CONTROL KUBERNETES BEGIN
    kubernetes_exposed_apis = [
        {
            'name': '{}-{}-api-v1-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'api/v1'
        },
        {
            'name': '{}-{}-apps-v1-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'apis/apps/v1'
        },
        {
            'name': '{}-{}-rbac-v1-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'apis/rbac.authorization.k8s.io/v1'
        },
        {
            'name': '{}-{}-v1beta1-extensions-type'.format(
                context.env['deployment'],
                extension_prefix
            ),
            'endpoint': 'apis/extensions/v1beta1'
        }
    ]
    for exposed_api in kubernetes_exposed_apis:
        descriptor_url = 'https://{}/swaggerapi/{}'.format(
            '$(ref.{}-k8s-cluster.endpoint)'.format(
                context.env['deployment']
            ),
            exposed_api['endpoint']
        )
        resources.append(
            {
                'name': exposed_api['name'],
                'type': gcp_type_provider,
                'properties': {
                    'options': {
                        'validationOptions': {
                            'schemaValidation': 'IGNORE_WITH_WARNINGS'
                        },
                        'inputMappings': [
                            {
                                'fieldName': 'name',
                                'location': 'PATH',
                                'methodMatch': '^(GET|DELETE|PUT)$',
                                'value': '$.ifNull($.resource.properties.metadata.name, $.resource.name)'

                            },
                            {
                                'fieldName': 'metadata.name',
                                'location': 'BODY',
                                'methodMatch': '^(PUT|POST)$',
                                'value': '$.ifNull($.resource.properties.metadata.name, $.resource.name)'
                            },
                            {
                                'fieldName': 'Authorization',
                                'location': 'HEADER',
                                'value': '$.concat("Bearer ", $.googleOauth2AccessToken())'
                            }
                        ],
                    },
                    'descriptorUrl': descriptor_url
                },
            }
        )
    # EXTEND API TO CONTROL KUBERNETES END

    # NETWORK DEFINITION BEGIN
    resources.append(
        {
            'name': "{}-network".format(context.env['deployment']),
            'type': "compute.{}.network".format(api_version),
            'properties': {
                'description': "{} network".format(context.env['deployment']),
                'autoCreateSubnetworks': False,
                'routingConfig': {
                    'routingMode': 'REGIONAL'
                }
            },
        }
    )

    resources.append(
        {
            'name': "{}-subnetwork".format(context.env['deployment']),
            'type': "compute.{}.subnetwork".format(api_version),
            'properties': {
                'description': "{} subnetwork".format(
                    context.env['deployment']
                ),
                'network': "$(ref.{}-network.selfLink)".format(
                    context.env['deployment']
                ),
                'ipCidrRange': '10.64.1.0/24',
                'region': 'us-east1',
                'privateIpGoogleAccess': True,
                'enableFlowLogs': False,
            }
        }
    )
    # NETWORK DEFINITION END

    # GKE DEFINITION BEGIN
    resources.append(
        {
            'name': "{}-k8s-cluster".format(context.env['deployment']),
            'type': "container.{}.cluster".format(api_version),
            'properties': {
                'zone': 'us-east1-b',
                'cluster': {
                    'description': "{} kubernetes cluster".format(
                        context.env['deployment']
                    ),
                    'privateClusterConfig': {
                        'enablePrivateNodes': False,
                        'masterIpv4CidrBlock': '10.0.0.0/28'
                    },
                    'ipAllocationPolicy': {
                        'useIpAliases': True
                    },
                    'nodePools': [
                        {
                            'name': "{}-cluster-pool".format(
                                context.env['deployment']
                            ),
                            'initialNodeCount': 1,
                            'config': {
                                'machineType': 'n1-standard-1',
                                'oauthScopes': [
                                    'https://www.googleapis.com/auth/compute',
                                    'https://www.googleapis.com/auth/devstorage.read_only',
                                    'https://www.googleapis.com/auth/logging.write',
                                    'https://www.googleapis.com/auth/monitoring'
                                ],
                            },
                            'management': {
                                'autoUpgrade': False,
                                'autoRepair': True
                            }
                        }],
                    'masterAuth': kube_initial_auth,
                    'loggingService': 'logging.googleapis.com',
                    'monitoringService': 'monitoring.googleapis.com',
                    'network': "$(ref.{}-network.selfLink)".format(
                        context.env['deployment']
                    ),
                    'clusterIpv4Cidr': '10.0.0.0/14',
                    'subnetwork': "$(ref.{}-subnetwork.selfLink)".format(
                        context.env['deployment']
                    ),
                    'enableKubernetesAlpha': False,
                    'resourceLabels': {
                        'purpose': 'expiramentation'
                    },
                    'networkPolicy': {
                        'provider': 'CALICO',
                        'enabled': True
                    },
                    'initialClusterVersion': 'latest',
                    'enableTpu': False,
                }
            }
        }
    )
    outputs.append(
        {
            'name': '{}-cluster-endpoint'.format(
                context.env['deployment']
            ),
            'value': '$(ref.{}-k8s-cluster.endpoint)'.format(
                context.env['deployment']
            ),
        }
    )
    # GKE DEFINITION END

    # bring it all together
    template = {
        'resources': resources,
        'outputs': outputs
    }

    # give it to google
    return template


if __name__ == '__main__':
    GenerateConfig({})

Additionally, here is the hello world template that follows, which uses the typeProviders created above.

def current_config():
    '''
    get the current configuration
    '''
    return {
        'name': 'atrium'
    }


def GenerateConfig(context):
    resources = []
    conf = current_config()
    resources.append(
        {
            'name': '{}-svc'.format(conf['name']),
            'type': "{}/{}-k8s-api-v1-type:/api/v1/namespaces/{}/pods".format(
                context.env['project'],
                conf['name'],
                '{namespace}'
            ),
            'properties': {
                'namespace': 'default',
                'apiVersion': 'v1',
                'kind': 'Pod',
                'metadata': {
                    'name': 'hello-world',
                },
                'spec': {
                    'restartPolicy': 'Never',
                    'containers': [
                        {
                            'name': 'hello',
                            'image': 'ubuntu:14.04',
                            'command': ['/bin/echo', 'hello', 'world'],
                        }
                    ]
                }
            }
        }
    )

    template = {
        'resources': resources,
    }

    return template


if __name__ == '__main__':
    GenerateConfig({})

If I leave enablePrivateNodes set to False:

                    'privateClusterConfig': {
                        'enablePrivateNodes': False,
                        'masterIpv4CidrBlock': '10.0.0.0/28'
                    }

this is the response I get:

~/code/github/gcp/expiramentation/atrium_gcp_infra 24s
❯ bash atrium/recycle.sh
Waiting for delete [operation-1562105370163-58cb9ffb0b7b8-7479dd98-275c6b14]...done.
Delete operation operation-1562105370163-58cb9ffb0b7b8-7479dd98-275c6b14 completed successfully.
Waiting for delete [operation-1562105393399-58cba01134528-be47dc30-755cb106]...done.
Delete operation operation-1562105393399-58cba01134528-be47dc30-755cb106 completed successfully.
The fingerprint of the deployment is IiWcrdbZA5MedNlJLIicOg==
Waiting for create [operation-1562105786056-58cba187abee2-5d761e87-b446baca]...done.
Create operation operation-1562105786056-58cba187abee2-5d761e87-b446baca completed successfully.
NAME                                TYPE                                   STATE      ERRORS  INTENT
atrium-k8s-api-v1-type              deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-apps-v1-type             deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-cluster                  container.v1.cluster                   COMPLETED  []
atrium-k8s-rbac-v1-type             deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-v1beta1-extensions-type  deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-network                      compute.v1.network                     COMPLETED  []
atrium-subnetwork                   compute.v1.subnetwork                  COMPLETED  []
The fingerprint of the deployment is QJ2NS5EhjemyQJThUWYNHA==
Waiting for create [operation-1562106179055-58cba2fe76fe7-957ef7a6-f55257bb]...done.
Create operation operation-1562106179055-58cba2fe76fe7-957ef7a6-f55257bb completed successfully.
NAME        TYPE                                                                      STATE      ERRORS  INTENT
atrium-svc  atrium-244423/atrium-k8s-api-v1-type:/api/v1/namespaces/{namespace}/pods  COMPLETED  []

~/code/github/gcp/expiramentation/atrium_gcp_infra 13m 48s

This is a great response; my custom typeProvider resources are created correctly against the API of my newly created cluster.

If, however, I give this cluster private nodes, using:

                     'privateClusterConfig': {
                         'enablePrivateNodes': True,
                         'masterIpv4CidrBlock': '10.0.0.0/28'
                     },

it fails:

~/code/github/gcp/expiramentation/atrium_gcp_infra 56s
❯ bash atrium/recycle.sh
Waiting for delete [operation-1562106572016-58cba47538c93-d34c17fc-8b863765]...done.
Delete operation operation-1562106572016-58cba47538c93-d34c17fc-8b863765 completed successfully.
Waiting for delete [operation-1562106592237-58cba4888184f-a5bc3135-4e662eed]...done.
Delete operation operation-1562106592237-58cba4888184f-a5bc3135-4e662eed completed successfully.
The fingerprint of the deployment is dk5nh_u5ZFFvYO-pCXnFBg==
Waiting for create [operation-1562106901442-58cba5af62f25-8b0e380f-3687aebd]...done.
Create operation operation-1562106901442-58cba5af62f25-8b0e380f-3687aebd completed successfully.
NAME                                TYPE                                   STATE      ERRORS  INTENT
atrium-k8s-api-v1-type              deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-apps-v1-type             deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-cluster                  container.v1.cluster                   COMPLETED  []
atrium-k8s-rbac-v1-type             deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-k8s-v1beta1-extensions-type  deploymentmanager.v2beta.typeProvider  COMPLETED  []
atrium-network                      compute.v1.network                     COMPLETED  []
atrium-subnetwork                   compute.v1.subnetwork                  COMPLETED  []
The fingerprint of the deployment is 4RnscwpcYTtS614VXqtjRg==
Waiting for create [operation-1562107350345-58cba75b7e680-f548a69f-1a85f105]...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1562107350345-58cba75b7e680-f548a69f-1a85f105]: errors:
- code: ERROR_PROCESSING_REQUEST
  message: 'Error fetching URL https://10.0.0.2:443/api/v1/namespaces/default/pods,
    reason: ERROR_EXCLUDED_IP'

10.0.0.2 is my cluster's private endpoint. I am having trouble finding a place where I can override the host of the https://10.0.0.2:443/api/v1/namespaces/default/pods URL so that it will attempt to contact the publicEndpoint rather than the privateEndpoint.

I believe that if this call went to the public endpoint it would succeed. Interestingly enough, the typeProvider declarations' descriptorUrl does reach the cluster's publicEndpoint and completes successfully; yet despite that, the creation of the actual API resources (such as the hello world example) still tries to talk to the private endpoint.

I assume this behavior should be overridable somewhere, but I have not been able to find any lead on it.
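To narrow down where the 10.0.0.2 host comes from, the only check I can think of is to fetch the same swagger descriptor the typeProvider consumes, but via the public endpoint, and look at which address it advertises as its base path. A minimal sketch, assuming a reachable public endpoint and a valid bearer token (both placeholders below are hypothetical):

import json
import ssl
import urllib.request

# Hypothetical placeholders: substitute the cluster's public endpoint and a
# bearer token (e.g. from `gcloud auth print-access-token`).
PUBLIC_ENDPOINT = '<cluster-public-ip>'
TOKEN = '<bearer-token>'

# The master serves a certificate for its own IP, so verification is disabled
# for this quick check only.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

req = urllib.request.Request(
    'https://{}/swaggerapi/api/v1'.format(PUBLIC_ENDPOINT),
    headers={'Authorization': 'Bearer {}'.format(TOKEN)}
)
descriptor = json.load(urllib.request.urlopen(req, context=ctx))

# If the advertised base path points at 10.0.0.2, the typeProvider would
# inherit the private address for every call it makes afterwards.
print(descriptor.get('basePath') or descriptor.get('host'))

If that prints the private address, it would at least confirm the host is coming from the descriptor itself rather than from anything in my typeProvider options.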

I have tried both the working public node configuration and the failing private node configuration.

1 Answer:

Answer 0 (score: 0)

So I ran into this same issue while writing [this](https://github.com/Aahzymandius/k8s-workshops/tree/master/8-live-debugging). While trying to debug it, I ran into two issues:

  1. The public endpoint is used initially to fetch the swagger definitions of the various k8s APIs and resources; however, that response includes the private endpoint of the API server, which causes the k8s types to try to use that IP.

  2. While trying to debug that error, plus a new one I ran into, I discovered that GKE 1.14.x and later does not support insecure calls to the k8s API, which causes the k8s types to fail even with a fully public cluster.

Since the feature does not seem to work with newer GKE versions, I stopped trying to debug it. I would recommend filing an issue on the GitHub repo for it, though; the feature would be extremely useful.
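For anyone who still wants to experiment with this, one thing worth trying before filing that issue is pinning the master below 1.14 instead of using 'latest'. Below is a minimal sketch of a helper that patches the resources the template generates; the version string is an assumption and should be replaced with something listed by `gcloud container get-server-config`.

# Sketch only: pin every container.v1.cluster resource in the generated
# template to a pre-1.14 master version before returning it.
def pin_cluster_version(resources, version='1.13.7-gke.8'):
    for resource in resources:
        if resource.get('type', '').startswith('container.'):
            resource['properties']['cluster']['initialClusterVersion'] = version
    return resources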
