1. Install Docker

# Install Docker
https://docs.docker.com/get-docker/
# Install Docker Compose
https://docs.docker.com/compose/install/
# Install Docker on CentOS
https://mp.weixin.qq.com/s/nHNPbCmdQs3E5x1QBP-ueA
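
After installing, it is worth confirming that both Docker and Compose are on the PATH before continuing. A quick check (version numbers will vary by machine; the second form applies to older standalone installs of Compose):

# Confirm Docker and Compose are installed and runnable
docker --version
docker compose version   # or: docker-compose --version on older installs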

2. Install Coze Studio

See: https://github.com/coze-dev/coze-studio/blob/main/README.zh_CN.md

Installation requirements: see the system requirements in the README linked above.

Create a directory:

mkdir coze-studio

Change into the directory:

cd coze-studio

Download the release:

wget https://github.com/coze-dev/coze-studio/archive/refs/tags/v0.2.4.tar.gz

Extract it:

tar -xf v0.2.4.tar.gz

Change into the extracted directory:

cd coze-studio-0.2.4/

Model template directory:

See: https://github.com/coze-dev/coze-studio/tree/main/backend/conf/model/template
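
To see which providers ship a template in the release you downloaded, listing the template directory is enough (the exact set of files depends on the version):

ls backend/conf/model/template/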

Copy a model configuration template:

# Choose the template that matches your model provider; this example uses Gemini
cp backend/conf/model/template/model_template_gemini.yaml \
   backend/conf/model/gemini.yaml

# The same pattern applies for DeepSeek
cp backend/conf/model/template/model_template_deepseek.yaml \
   backend/conf/model/deepseek.yaml

View the model configuration templates:

# backend/conf/model/gemini.yaml
id: 67010
name: Gemini-2.5-Flash
icon_uri: default_icon/gemini_v2.png
icon_url: ""
description:
  zh: gemini 模型简介
  en: gemini model description
default_parameters:
  - name: temperature
    label:
      zh: 生成随机性
      en: Temperature
    desc:
      zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性,反之,降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
      en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
    type: float
    min: "0"
    max: "1"
    default_val:
      balance: "0.8"
      creative: "1"
      default_val: "1.0"
      precise: "0.3"
    precision: 1
    options: []
    style:
      widget: slider
      label:
        zh: 生成多样性
        en: Generation diversity
  - name: max_tokens
    label:
      zh: 最大回复长度
      en: Response max length
    desc:
      zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
      en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
    type: int
    min: "1"
    max: "4096"
    default_val:
      default_val: "4096"
    options: []
    style:
      widget: slider
      label:
        zh: 输入及输出设置
        en: Input and output settings
  - name: top_p
    label:
      zh: Top P
      en: Top P
    desc:
      zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择,直到这些词汇的总概率累积达到Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
      en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
    type: float
    min: "0"
    max: "1"
    default_val:
      default_val: "0.7"
    precision: 2
    options: []
    style:
      widget: slider
      label:
        zh: 生成多样性
        en: Generation diversity
  - name: response_format
    label:
      zh: 输出格式
      en: Response format
    desc:
      zh: '- **文本**: 使用普通文本格式回复\n- **JSON**: 将引导模型使用JSON格式输出'
      en: '**Response Format**:\n\n- **JSON**: Uses JSON format for replies'
    type: int
    min: ""
    max: ""
    default_val:
      default_val: "0"
    options:
      - label: Text
        value: "0"
      - label: JSON
        value: "2"
    style:
      widget: radio_buttons
      label:
        zh: 输入及输出设置
        en: Input and output settings
meta:
  protocol: gemini
  capability:
    function_call: true
    input_modal:
      - text
      - image
      - audio
      - video
    input_tokens: 1048576
    json_mode: true
    max_tokens: 1114112
    output_modal:
      - text
    output_tokens: 65536
    prefix_caching: true
    reasoning: true
    prefill_response: true
  conn_config:
    base_url: "https://generativelanguage.googleapis.com/"
    api_key: ""
    timeout: 0s
    model: gemini-2.5-flash
    temperature: 0.7
    frequency_penalty: 0
    presence_penalty: 0
    max_tokens: 4096
    top_p: 1
    top_k: 0
    stop: []
    gemini:
      backend: 0
      project: ""
      location: ""
      api_version: ""
      headers:
        key_1:
          - val_1
          - val_2
      timeout_ms: 0
      include_thoughts: true
      thinking_budget: null
    custom: {}
  status: 0
# backend/conf/model/deepseek.yaml
id: 66010
name: DeepSeek-V3
icon_uri: default_icon/deepseek_v2.png
icon_url: ""
description:
  zh: deepseek 模型简介
  en: deepseek model description
default_parameters:
  - name: temperature
    label:
      zh: 生成随机性
      en: Temperature
    desc:
      zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性,反之,降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
      en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
    type: float
    min: "0"
    max: "1"
    default_val:
      balance: "0.8"
      creative: "1"
      default_val: "1.0"
      precise: "0.3"
    precision: 1
    options: []
    style:
      widget: slider
      label:
        zh: 生成随机性
        en: Generation diversity
  - name: max_tokens
    label:
      zh: 最大回复长度
      en: Response max length
    desc:
      zh: 控制模型输出的Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
      en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
    type: int
    min: "1"
    max: "4096"
    default_val:
      default_val: "4096"
    options: []
    style:
      widget: slider
      label:
        zh: 输入及输出设置
        en: Input and output settings
  - name: response_format
    label:
      zh: 输出格式
      en: Response format
    desc:
      zh: '- **文本**: 使用普通文本格式回复\n- **JSON**: 将引导模型使用JSON格式输出'
      en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **Markdown**: Uses Markdown format for replies\n- **JSON**: Uses JSON format for replies'
    type: int
    min: ""
    max: ""
    default_val:
      default_val: "0"
    options:
      - label: Text
        value: "0"
      - label: JSON Object
        value: "1"
    style:
      widget: radio_buttons
      label:
        zh: 输入及输出设置
        en: Input and output settings
meta:
  protocol: deepseek
  capability:
    function_call: false
    input_modal:
      - text
    input_tokens: 128000
    json_mode: false
    max_tokens: 128000
    output_modal:
      - text
    output_tokens: 16384
    prefix_caching: false
    reasoning: false
    prefill_response: false
  conn_config:
    base_url: "https://api.deepseek.com"
    api_key: "sk-89fb15811b6944e09cfc2fe257274a18"
    timeout: 0s
    model: "DeepSeek-R1-0528"
    temperature: 0.7
    frequency_penalty: 0
    presence_penalty: 0
    max_tokens: 4096
    top_p: 1
    top_k: 0
    stop: []
    deepseek:
      response_format_type: text
    custom: {}
  status: 0

Modify the model configuration templates:

Notes:
Set the id, meta.conn_config.api_key, and meta.conn_config.model fields.
# Get a Gemini API key here:
https://aistudio.google.com/apikey
# Get a DeepSeek API key here:
https://platform.deepseek.com/api_keys
https://api-docs.deepseek.com/zh-cn/
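
A minimal sketch of the edit, using gemini.yaml as the example target. Each of the three fields appears only once in the template, so a simple grep is enough to confirm the change took effect:

# Set id, meta.conn_config.api_key and meta.conn_config.model, then save
vi backend/conf/model/gemini.yaml

# Verify the three fields now contain your values
grep -nE '^id:|api_key:|model:' backend/conf/model/gemini.yaml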

Change into the docker directory:

cd docker

Copy the .env.example file:

cp .env.example .env

View the .env file:

See: https://github.com/coze-dev/coze-studio/blob/main/docker/.env.example
# Server
export LISTEN_ADDR=":8888"
export LOG_LEVEL="debug"
export MAX_REQUEST_BODY_SIZE=1073741824
export SERVER_HOST="http://localhost${LISTEN_ADDR}"
export MINIO_PROXY_ENDPOINT=""
export USE_SSL="0"
export SSL_CERT_FILE=""
export SSL_KEY_FILE=""
# MySQL
export MYSQL_ROOT_PASSWORD=root
export MYSQL_DATABASE=opencoze
export MYSQL_USER=coze
export MYSQL_PASSWORD=coze123
export MYSQL_HOST=mysql
export MYSQL_PORT=3306
export MYSQL_DSN="${MYSQL_USER}:${MYSQL_PASSWORD}@tcp(${MYSQL_HOST}:${MYSQL_PORT})/${MYSQL_DATABASE}?charset=utf8mb4&parseTime=True"
export ATLAS_URL="mysql://${MYSQL_USER}:${MYSQL_PASSWORD}@${MYSQL_HOST}:${MYSQL_PORT}/${MYSQL_DATABASE}?charset=utf8mb4&parseTime=True"
# Redis
export REDIS_AOF_ENABLED=no
export REDIS_IO_THREADS=4
export ALLOW_EMPTY_PASSWORD=yes
export REDIS_ADDR="redis:6379"
export REDIS_PASSWORD=""
# This Upload component used in Agent / workflow File/Image With LLM  , support the component of imagex / storage
# default: storage, use the settings of storage component
# if imagex, you must finish the configuration of <VolcEngine ImageX> 
export FILE_UPLOAD_COMPONENT_TYPE="storage"
# VolcEngine ImageX
export VE_IMAGEX_AK=""
export VE_IMAGEX_SK=""
export VE_IMAGEX_SERVER_ID=""
export VE_IMAGEX_DOMAIN=""
export VE_IMAGEX_TEMPLATE=""
export VE_IMAGEX_UPLOAD_HOST="https://imagex.volcengineapi.com"
# Storage component
export STORAGE_TYPE="minio" # minio / tos / s3
export STORAGE_UPLOAD_HTTP_SCHEME="http" # http / https. If coze studio website is https, you must set it to https
export STORAGE_BUCKET="opencoze"
# MiniIO
export MINIO_ROOT_USER=minioadmin
export MINIO_ROOT_PASSWORD=minioadmin123
export MINIO_DEFAULT_BUCKETS=milvus
export MINIO_AK=$MINIO_ROOT_USER
export MINIO_SK=$MINIO_ROOT_PASSWORD
export MINIO_ENDPOINT="minio:9000"
export MINIO_API_HOST="http://${MINIO_ENDPOINT}"
# TOS
export TOS_ACCESS_KEY=
export TOS_SECRET_KEY=
export TOS_ENDPOINT=https://tos-cn-beijing.volces.com
export TOS_BUCKET_ENDPOINT=https://opencoze.tos-cn-beijing.volces.com
export TOS_REGION=cn-beijing
# S3
export S3_ACCESS_KEY=
export S3_SECRET_KEY=
export S3_ENDPOINT=
export S3_BUCKET_ENDPOINT=
export S3_REGION=
# Elasticsearch
export ES_ADDR="http://elasticsearch:9200"
export ES_VERSION="v8"
export ES_USERNAME=""
export ES_PASSWORD=""
export COZE_MQ_TYPE="nsq" # nsq / kafka / rmq
export MQ_NAME_SERVER="nsqd:4150"
# RocketMQ
export RMQ_ACCESS_KEY=""
export RMQ_SECRET_KEY=""
# Settings for VectorStore
# VectorStore type: milvus / vikingdb
# If you want to use vikingdb, you need to set up the vikingdb configuration.
export VECTOR_STORE_TYPE="milvus"
# milvus vector store
export MILVUS_ADDR="milvus:19530"
# vikingdb vector store for Volcengine
export VIKING_DB_HOST=""
export VIKING_DB_REGION=""
export VIKING_DB_AK=""
export VIKING_DB_SK=""
export VIKING_DB_SCHEME=""
export VIKING_DB_MODEL_NAME="" # if vikingdb model name is not set, you need to set Embedding settings
# Settings for Embedding
# The Embedding model relied on by knowledge base vectorization does not need to be configured
# if the vector database comes with built-in Embedding functionality (such as VikingDB). Currently,
# Coze Studio supports four access methods: openai, ark, ollama, and custom http. Users can simply choose one of them when using
# embedding type: openai / ark / ollama / http
export EMBEDDING_TYPE="ark"
export EMBEDDING_MAX_BATCH_SIZE=100
# openai embedding
export OPENAI_EMBEDDING_BASE_URL=""       # (string, required) OpenAI embedding base_url
export OPENAI_EMBEDDING_MODEL=""          # (string, required) OpenAI embedding model
export OPENAI_EMBEDDING_API_KEY=""        # (string, required) OpenAI embedding api_key
export OPENAI_EMBEDDING_BY_AZURE=false    # (bool,   optional) OpenAI embedding by_azure
export OPENAI_EMBEDDING_API_VERSION=""    # (string, optional) OpenAI embedding azure api version
export OPENAI_EMBEDDING_DIMS=1024         # (int,    required) OpenAI embedding dimensions
export OPENAI_EMBEDDING_REQUEST_DIMS=1024 # (int,    optional) OpenAI embedding dimensions in requests, need to be empty if api doesn't support specifying dimensions.
# ark embedding by volcengine / byteplus
export ARK_EMBEDDING_MODEL=""    # (string, required) Ark embedding model
export ARK_EMBEDDING_API_KEY=""  # (string, required) Ark embedding api_key
export ARK_EMBEDDING_DIMS="2048" # (int,    required) Ark embedding dimensions
export ARK_EMBEDDING_BASE_URL="" # (string, required) Ark embedding base_url
export ARK_EMBEDDING_API_TYPE="" # (string, optional) Ark embedding api type, should be "text_api" / "multi_modal_api". Default "text_api".
# ollama embedding
export OLLAMA_EMBEDDING_BASE_URL="" # (string, required) Ollama embedding base_url
export OLLAMA_EMBEDDING_MODEL=""    # (string, required) Ollama embedding model
export OLLAMA_EMBEDDING_DIMS=""     # (int,    required) Ollama embedding dimensions
# http embedding
export HTTP_EMBEDDING_ADDR=""   # (string, required) http embedding address
export HTTP_EMBEDDING_DIMS=1024 # (string, required) http embedding dimensions
# Settings for OCR
# If you want to use the OCR-related functions in the knowledge base feature,You need to set up the OCR configuration.
# Currently, Coze Studio has built-in Volcano OCR.
# Supported OCR types: `ve`, `paddleocr`
export OCR_TYPE="ve"
# ve ocr
export VE_OCR_AK=""
export VE_OCR_SK=""
# paddleocr ocr
export PADDLEOCR_OCR_API_URL=""
# Settings for Model
# Model for agent & workflow
# add suffix number to add different models
export MODEL_PROTOCOL_0="ark"       # protocol
export MODEL_OPENCOZE_ID_0="100001" # id for record
export MODEL_NAME_0=""              # model name for show
export MODEL_ID_0=""                # model name for connection
export MODEL_API_KEY_0=""           # model api key
export MODEL_BASE_URL_0=""          # model base url
# Model for knowledge nl2sql, messages2query (rewrite), image annotation, workflow knowledge recall
# add prefix to assign specific model, downgrade to default config when prefix is not configured:
# 1. nl2sql:                    NL2SQL_ (e.g. NL2SQL_BUILTIN_CM_TYPE)
# 2. messages2query:            M2Q_    (e.g. M2Q_BUILTIN_CM_TYPE)
# 3. image annotation:          IA_     (e.g. IA_BUILTIN_CM_TYPE)
# 4. workflow knowledge recall: WKR_    (e.g. WKR_BUILTIN_CM_TYPE)
# supported chat model type: openai / ark / deepseek / ollama / qwen / gemini
export BUILTIN_CM_TYPE="ark"
# type openai
export BUILTIN_CM_OPENAI_BASE_URL=""
export BUILTIN_CM_OPENAI_API_KEY=""
export BUILTIN_CM_OPENAI_BY_AZURE=false
export BUILTIN_CM_OPENAI_MODEL=""
# type ark
export BUILTIN_CM_ARK_API_KEY=""
export BUILTIN_CM_ARK_MODEL=""
export BUILTIN_CM_ARK_BASE_URL=""
# type deepseek
export BUILTIN_CM_DEEPSEEK_BASE_URL=""
export BUILTIN_CM_DEEPSEEK_API_KEY=""
export BUILTIN_CM_DEEPSEEK_MODEL=""
# type ollama
export BUILTIN_CM_OLLAMA_BASE_URL=""
export BUILTIN_CM_OLLAMA_MODEL=""
# type qwen
export BUILTIN_CM_QWEN_BASE_URL=""
export BUILTIN_CM_QWEN_API_KEY=""
export BUILTIN_CM_QWEN_MODEL=""
# type gemini
export BUILTIN_CM_GEMINI_BACKEND=""
export BUILTIN_CM_GEMINI_API_KEY=""
export BUILTIN_CM_GEMINI_PROJECT=""
export BUILTIN_CM_GEMINI_LOCATION=""
export BUILTIN_CM_GEMINI_BASE_URL=""
export BUILTIN_CM_GEMINI_MODEL=""
# Workflow Code Runner Configuration
# Supported code runner types: sandbox / local
# Default using local
# - sandbox: execute python code in a sandboxed env with deno + pyodide
# - local: using venv, no env isolation
export CODE_RUNNER_TYPE="local"
# Sandbox sub configuration
# Access restricted to specific environment variables, split with comma, e.g. "PATH,USERNAME"
export CODE_RUNNER_ALLOW_ENV=""
# Read access restricted to specific paths, split with comma, e.g. "/tmp,./data"
export CODE_RUNNER_ALLOW_READ=""
# Write access restricted to specific paths, split with comma, e.g. "/tmp,./data"
export CODE_RUNNER_ALLOW_WRITE=""
# Subprocess execution restricted to specific commands, split with comma, e.g. "python,git"
export CODE_RUNNER_ALLOW_RUN=""
# Network access restricted to specific domains/IPs, split with comma, e.g. "api.test.com,api.test.org:8080"
# The following CDN supports downloading the packages required for pyodide to run Python code. Sandbox may not work properly if removed.
export CODE_RUNNER_ALLOW_NET="cdn.jsdelivr.net"
# Foreign Function Interface access to specific libraries, split with comma, e.g. "/usr/lib/libm.so"
export CODE_RUNNER_ALLOW_FFI=""
# Directory for deno modules, default using pwd. e.g. "/tmp/path/node_modules"
export CODE_RUNNER_NODE_MODULES_DIR=""
# Code execution timeout, default 60 seconds. e.g. "2.56"
export CODE_RUNNER_TIMEOUT_SECONDS=""
# Code execution memory limit, default 100MB. e.g. "256"
export CODE_RUNNER_MEMORY_LIMIT_MB=""
# The function of registration controller
# If you want to disable the registration feature, set DISABLE_USER_REGISTRATION to true. You can then control allowed registrations via a whitelist with ALLOW_REGISTRATION_EMAIL.
export DISABLE_USER_REGISTRATION="" # default "", if you want to disable, set to true
export ALLOW_REGISTRATION_EMAIL=""  # is a list of email addresses, separated by ",". Example: "11@example.com,22@example.com"
# Plugin AES secret.
# PLUGIN_AES_AUTH_SECRET is the secret of used to encrypt plugin authorization payload.
# The size of secret must be 16, 24 or 32 bytes.
export PLUGIN_AES_AUTH_SECRET='^*6x3hdu2nc%-p38'
# PLUGIN_AES_STATE_SECRET is the secret of used to encrypt oauth state.
# The size of secret must be 16, 24 or 32 bytes.
export PLUGIN_AES_STATE_SECRET='osj^kfhsd*(z!sno'
# PLUGIN_AES_OAUTH_TOKEN_SECRET is the secret of used to encrypt oauth refresh token and access token.
# The size of secret must be 16, 24 or 32 bytes.
export PLUGIN_AES_OAUTH_TOKEN_SECRET='cn+$PJ(HhJ[5d*z9'
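
Most of the variables above can stay at their defaults for a first run. The ones that usually need real values are the embedding settings (the knowledge base cannot vectorize documents without them) and the built-in chat model used for nl2sql, query rewriting, and image annotation. A minimal sketch assuming the default ark embedding type; every endpoint, model name, and key below is a placeholder, not a working credential:

# Embedding used by knowledge-base vectorization (EMBEDDING_TYPE defaults to "ark")
export ARK_EMBEDDING_BASE_URL="https://your-ark-endpoint/api/v3"   # placeholder
export ARK_EMBEDDING_MODEL="your-embedding-model-id"               # placeholder
export ARK_EMBEDDING_API_KEY="your-ark-api-key"                    # placeholder
export ARK_EMBEDDING_DIMS="2048"

# Built-in chat model for nl2sql / messages2query / image annotation
export BUILTIN_CM_TYPE="ark"
export BUILTIN_CM_ARK_BASE_URL="https://your-ark-endpoint/api/v3"  # placeholder
export BUILTIN_CM_ARK_MODEL="your-chat-model-id"                   # placeholder
export BUILTIN_CM_ARK_API_KEY="your-ark-api-key"                   # placeholder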

View the docker-compose.yml file:

See: https://github.com/coze-dev/coze-studio/blob/main/docker/docker-compose.yml
name: coze-studio

The link above has the full file. In outline, it defines one bridge network (coze-network), loads environment variables from .env through an x-env-file anchor, and declares the following services:

- mysql: mysql:8.4.5 (container coze-mysql), initialized from ./volumes/mysql/schema.sql, data persisted in ./data/mysql
- redis: bitnami/redis:8.0 (coze-redis), data in ./data/bitnami/redis
- elasticsearch: bitnami/elasticsearch:8.18.0 (coze-elasticsearch); its startup command installs the analysis-smartcn plugin and runs ./volumes/elasticsearch/setup_es.sh to create the index schemas
- minio: minio/minio:RELEASE.2025-06-13T11-33-47Z-cpuv1 (coze-minio); creates the storage bucket, copies the default and plugin icons into it, data in ./data/minio
- etcd: bitnami/etcd:3.5 (coze-etcd), published on host ports 2379/2380
- milvus: milvusdb/milvus:v2.5.10 (coze-milvus), depends on healthy etcd and minio
- nsqlookupd / nsqd / nsqadmin: nsqio/nsq:v1.2.1, the NSQ message queue
- coze-server: opencoze/opencoze:latest (coze-server); mounts ./.env as /app/.env and ../backend/conf (including your model YAML files) as /app/resources/conf, publishes 8889:8889, and waits for mysql, redis, elasticsearch, minio and milvus to report healthy before running /app/opencoze
- coze-web: opencoze/web:latest (coze-web); an nginx front end published as 8888:80 on the host, configured from ./nginx, and proxying to coze-server

Most infrastructure services expose their ports only inside coze-network; the web UI on host port 8888 is the main entry point.
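
Before starting anything, a quick sanity check confirms that Compose can parse the file and interpolate the variables from .env. Run it from the docker directory (the hyphenated docker-compose form matches the commands used below; newer installs accept docker compose as well):

docker-compose config > /dev/null && echo "docker-compose.yml and .env look OK"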

Create and start the containers:

docker-compose up -d
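
The first start pulls several large images, and coze-server waits for MySQL, Redis, Elasticsearch, MinIO, and Milvus to report healthy, so it can take a few minutes before the UI responds. A small sketch for watching the startup, using only the service and container names from the compose file above:

# Per-service state of this compose project
docker-compose ps

# Follow the application log until the HTTP server starts listening
docker-compose logs -f coze-server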

List running containers:

docker ps
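
docker ps lists every container on the host; to narrow the output to this deployment you can filter on the coze- name prefix assigned by the compose file (purely a convenience, not required by the installation):

docker ps --filter "name=coze-" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"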

Stop and remove the containers:

docker-compose down

Remove the images:

docker rmi \
  mysql:8.4.5 \
  bitnami/redis:8.0 \
  bitnami/elasticsearch:8.18.0 \
  minio/minio:RELEASE.2025-06-13T11-33-47Z-cpuv1 \
  bitnami/etcd:3.5 \
  milvusdb/milvus:v2.5.10 \
  nsqio/nsq:v1.2.1 \
  opencoze/opencoze:latest \
  opencoze/web:latest
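
Removing the tags above can still leave dangling layers behind. If you want those cleared as well, the standard prune command does it; note that it removes all dangling images on the host, not only the ones that belonged to this project:

docker image prune -f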

Delete the persisted data directory (run this from the docker directory; it permanently removes all MySQL, Redis, Elasticsearch, MinIO, and Milvus data):

rm -rf ./data

3. Access via Browser

Assume the server's IP address is 192.168.186.128.
Open in a browser: http://192.168.186.128:8888
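
If the page does not load, checking from the server itself helps separate a firewall or port-mapping problem from a stack that simply has not finished starting (the IP below is the example address used above):

# Any HTTP response here means nginx (coze-web) is reachable; no response points to the firewall or the containers
curl -I http://192.168.186.128:8888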

Enter an email address and password, then choose Log in or Register:

Admin console:

4. See Also

https://www.coze.cn/
https://www.coze.cn/open/docs
https://www.coze.cn/opensource
https://github.com/coze-dev/coze-studio
https://github.com/coze-dev/coze-studio/blob/main/README.zh_CN.md
https://mp.weixin.qq.com/s/amNVehNZib1gwnJt37utxg
