Support OpenAI embedding. #1542

Open · wants to merge 4 commits into main
Conversation

@duxin40 commented Nov 25, 2024

Ⅰ. Describe what this PR did

This PR adds support for OpenAI embeddings.

Ⅱ. Does this pull request fix one issue?

Fixes #1443.

Ⅲ. Why don't you add test cases (unit test/integration test)?

Ⅳ. Describe how to verify it

  1. Build ai-proxy:

```shell
cd ./higress/plugins/wasm-go/extensions/ai-proxy
tinygo build -o ai.wasm -scheduler=none -target=wasi -gc=custom -tags="custommalloc nottinygc_finalizer proxy_wasm_version_0_2_100" ./
```
  2. Build ai-cache:

```shell
cd ./higress/plugins/wasm-go/extensions/ai-cache
tinygo build -o main.wasm -scheduler=none -target=wasi -gc=custom -tags="custommalloc nottinygc_finalizer proxy_wasm_version_0_2_100" ./
```
  3. Start Higress:

docker-compose.yaml:

```yaml
services:
  envoy:
    image: higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/gateway:v2.0.2
    entrypoint: /usr/local/bin/envoy
    command: -c /etc/envoy/envoy.yaml --component-log-level wasm:debug
    networks:
      - wasmtest
    ports:
      - "10002:10002"
    volumes:
      - ./envoy.yaml:/etc/envoy/envoy.yaml
      - ./main.wasm:/etc/envoy/main.wasm
      - ../ai-proxy/ai.wasm:/etc/envoy/ai.wasm

networks:
  wasmtest: {}
```
envoy.yaml:

```yaml
admin:
  address:
    socket_address:
      protocol: TCP
      address: 0.0.0.0
      port_value: 9901
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address:
          protocol: TCP
          address: 0.0.0.0
          port_value: 10002
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                scheme_header_transformation:
                  scheme_to_overwrite: https
                stat_prefix: ingress_http
                # Output envoy logs to stdout
                access_log:
                  - name: envoy.access_loggers.stdout
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
                # Modify as required
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: [ "*" ]
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: openai
                            timeout: 300s
                http_filters:
                  - name: wasmtest
                    typed_config:
                      "@type": type.googleapis.com/udpa.type.v1.TypedStruct
                      type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
                      value:
                        config:
                          name: wasmtest
                          vm_config:
                            runtime: envoy.wasm.runtime.v8
                            code:
                              local:
                                filename: /etc/envoy/ai.wasm
                          configuration:
                            "@type": "type.googleapis.com/google.protobuf.StringValue"
                            value: |
                              {
                                "provider": {
                                  "type": "openai",
                                  "apiTokens": [
                                    "sk-"
                                  ]
                                }
                              }
                  - name: cache
                    typed_config:
                      "@type": type.googleapis.com/udpa.type.v1.TypedStruct
                      type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
                      value:
                        config:
                          name: cache
                          vm_config:
                            runtime: envoy.wasm.runtime.v8
                            code:
                              local:
                                filename: /etc/envoy/main.wasm
                          configuration:
                            "@type": "type.googleapis.com/google.protobuf.StringValue"
                            value: |
                              {
                                "embedding": {
                                  "type": "openai",
                                  "serviceName": "openai.dns",
                                  "apiKey": "sk-"
                                },
                                "vector": {
                                  "type": "dashvector",
                                  "serviceName": "dashvector.dns",
                                  "collectionID": "test1",
                                  "serviceHost": "your host",
                                  "apiKey": "your key",
                                  "threshold": 0.4
                                },
                                "cache": {
                                  "serviceName": "",
                                  "type": ""
                                }
                              }
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: openai
      connect_timeout: 30s
      type: LOGICAL_DNS
      dns_lookup_family: V4_ONLY
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: openai
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: api.openai.com
                      port_value: 443
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
          "sni": "api.openai.com"

    - name: outbound|443||openai.dns
      connect_timeout: 30s
      type: LOGICAL_DNS
      dns_lookup_family: V4_ONLY
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: outbound|443||openai.dns
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: api.openai.com
                      port_value: 443
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
          "sni": "api.openai.com"

    - name: outbound|443||dashvector.dns
      connect_timeout: 30s
      type: LOGICAL_DNS
      dns_lookup_family: V4_ONLY
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: outbound|443||dashvector.dns
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: vrs-*aliyuncs.com
                      port_value: 443
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
          "sni": "vrs-*aliyuncs.com"
```
  4. curl test:

request (API key redacted):

```shell
curl "http://localhost:10002/ai/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-openai-api-key>" \
  -d '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "你好,你是谁?"
    }
  ]
}'
```

embedding response log: (screenshot)
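For reviewers unfamiliar with the flow being verified above: the ai-cache plugin embeds the user query via OpenAI's `POST /v1/embeddings` endpoint, then compares the returned vector against cached entries in the vector store. The sketch below illustrates the request/response shapes from OpenAI's public embeddings API and a similarity check suggested by the `"threshold": 0.4` setting in the config. The model name and the exact threshold semantics (similarity vs. distance, handled by the dashvector provider) are assumptions for illustration, not taken from this PR.

```python
import json
import math

# Hypothetical request body an embedding provider would send to
# POST /v1/embeddings (model name is an assumption, not the plugin's
# confirmed default).
request_body = {
    "model": "text-embedding-ada-002",
    "input": "你好,你是谁?",
}
print(json.dumps(request_body, ensure_ascii=False))

# A trimmed-down example of the response shape documented by OpenAI.
sample_response = json.loads("""
{
  "object": "list",
  "data": [
    {"object": "embedding", "index": 0, "embedding": [0.1, 0.2, 0.2]}
  ],
  "model": "text-embedding-ada-002",
  "usage": {"prompt_tokens": 8, "total_tokens": 8}
}
""")
vector = sample_response["data"][0]["embedding"]

def cosine_similarity(a, b):
    # Standard cosine similarity: dot(a, b) / (|a| * |b|).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# With "threshold": 0.4 configured, a cache hit would require the query
# embedding and a cached embedding to score at least 0.4 -- an
# illustration only; the real comparison happens inside dashvector.
print(cosine_similarity(vector, vector))
```

An identical query yields an identical vector, so the similarity is 1.0 and comfortably clears the 0.4 threshold, which is what makes the second identical request in a verification run hit the cache.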

Ⅴ. Special notes for reviews

@CLAassistant commented:

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.

duxin40 does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.

@duxin40 duxin40 changed the title Support openai embedding. Support OpenAI embedding. Nov 25, 2024
@duxin40 (Author) commented Nov 26, 2024

@CH3CHO Could you take some time to help review this? Thanks!

@CH3CHO (Collaborator) commented Nov 26, 2024

> @CH3CHO Could you take some time to help review this? Thanks!

Sure. I expect to post review comments today. Also, please sign the CLA first. Thank you!

@CH3CHO (Collaborator) left a review comment:

LGTM. Thanks.

Successfully merging this pull request may close these issues.

AI cache plugin integration with OpenAI: https://platform.openai.com/docs/guides/embeddings