Contents

  • 1. Environment
  • 2. Synchronous client
    • 2.1. Procedural style
      • 2.1.1. Streaming output
      • 2.1.2. Non-streaming output
    • 2.2. Object-oriented style
  • 3. Asynchronous client
    • 3.1. Procedural style
    • 3.2. Object-oriented style
    • 3.3. Attempted to call a sync iterator on an async stream.

Reference:
https://www.jb51.net/article/262636.htm

Secondary references:
https://blog.csdn.net/gitblog_00079/article/details/139587558
https://blog.csdn.net/gitblog_00626/article/details/141801526
https://www.cnblogs.com/kaibindirver/p/18755942

https://juejin.cn/post/7088892051470680078
https://cloud.tencent.com/developer/article/1988628
https://docs.pingcode.com/ask/1179824.html
https://blog.csdn.net/2501_91483145/article/details/148616194

1. Environment

This article uses ollama to serve a local model:

api_key 	= "EMPTY"
base_url 	= "http://192.168.17.100:11434/v1/chat/completions"
model 		= "deepseek-r1:1.5b"

2. Synchronous client

Key references:
https://blog.csdn.net/maybe_9527/article/details/146459501
https://www.jb51.net/article/262636.htm

2.1. Procedural style

import json
from openai.types.chat import ChatCompletion, ChatCompletionChunk
from httpx_sse import EventSource
from httpx import Client, Timeout


def __chunk(data, stream: bool = True):
    if stream:
        if data:
            answer_data = data.data
            if answer_data == "[DONE]":
                # the stream always ends with a literal [DONE]
                return None
            answer_dict = json.loads(answer_data)
            try:
                return ChatCompletionChunk(**answer_dict)
            except Exception as ex:
                print(f"__chunk Exception:{str(ex)}")
    else:
        answer_dict = json.loads(data)
        try:
            return ChatCompletion(**answer_dict)
        except Exception as ex:
            print(f"__chunk Exception:{str(ex)}")
    return None


def sync_main(base_url, headers, data):
    with Client() as client:
        try:
            # The timeout configuration matters here:
            # 5 s overall, but allow up to 10 s for reads.
            timeout_config = Timeout(5.0, read=10.0)
            with client.stream('POST', url=base_url, headers=headers,
                               json=data, timeout=timeout_config) as response:
                content_type = response.headers.get('content-type', '').lower()
                if 'text/event-stream' in content_type:     # streaming answer
                    for event in EventSource(response).iter_sse():
                        chunk = __chunk(data=event)
                        if chunk:
                            yield chunk
                else:                                       # non-streaming answer
                    body = response.read()                  # read once, reuse below
                    print(body)
                    chunk = __chunk(body, stream=False)
                    yield chunk
        except Exception as e:
            print(e)


if __name__ == "__main__":
    api_key     = "EMPTY"
    base_url    = "http://192.168.17.100:11434/v1/chat/completions"
    model       = "deepseek-r1:1.5b"

    headers = {
        "Authorization": f"Bearer {api_key}",
        "Accept": "*/*",
        # "Accept": "text/event-stream"
    }
    messages = [
        {"role": "system", "content": "You are a helpful assistant. Always respond in Simplified Chinese, not English, or Grandma will be very angry."},
        {"role": "user", "content": "你好"}
    ]
    data = {
        "model": model,
        "messages": messages,
        "stream": True
    }

    response = sync_main(base_url=base_url, headers=headers, data=data)
    for chunk in response:
        print(chunk)

2.1.1. Streaming output

......
{"id": "chatcmpl-476", "object": "chat.completion.chunk", "created": 1752575345, "model": "deepseek-r1:1.5b", "system_fingerprint": "fp_ollama", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "\u5417"}, "finish_reason": null}]}
ChatCompletionChunk(id='chatcmpl-476', choices=[Choice(delta=ChoiceDelta(content='吗', function_call=None, refusal=None, role='assistant', tool_calls=None), finish_reason=None, index=0, logprobs=None)], created=1752575345, model='deepseek-r1:1.5b', object='chat.completion.chunk', service_tier=None, system_fingerprint='fp_ollama', usage=None)
{"id": "chatcmpl-476", "object": "chat.completion.chunk", "created": 1752575345, "model": "deepseek-r1:1.5b", "system_fingerprint": "fp_ollama", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "\uff1f"}, "finish_reason": null}]}
ChatCompletionChunk(id='chatcmpl-476', choices=[Choice(delta=ChoiceDelta(content='?', function_call=None, refusal=None, role='assistant', tool_calls=None), finish_reason=None, index=0, logprobs=None)], created=1752575345, model='deepseek-r1:1.5b', object='chat.completion.chunk', service_tier=None, system_fingerprint='fp_ollama', usage=None)
{"id": "chatcmpl-476", "object": "chat.completion.chunk", "created": 1752575345, "model": "deepseek-r1:1.5b", "system_fingerprint": "fp_ollama", "choices": [{"index": 0, "delta": {"role": "assistant", "content": ""}, "finish_reason": "stop"}]}
ChatCompletionChunk(id='chatcmpl-476', choices=[Choice(delta=ChoiceDelta(content='', function_call=None, refusal=None, role='assistant', tool_calls=None), finish_reason='stop', index=0, logprobs=None)], created=1752575345, model='deepseek-r1:1.5b', object='chat.completion.chunk', service_tier=None, system_fingerprint='fp_ollama', usage=None)

Parsed result

{
    "id": "chatcmpl-476",
    "object": "chat.completion.chunk",
    "created": 1752575345,
    "model": "deepseek-r1:1.5b",
    "system_fingerprint": "fp_ollama",
    "choices": [
        {
            "index": 0,
            "delta": {
                "role": "assistant",
                "content": ""
            },
            "finish_reason": "stop"
        }
    ]
}
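The chunks above carry the incremental text in `choices[0].delta.content`; concatenating those deltas rebuilds the full answer. A minimal sketch over raw SSE payload strings (the text deltas are made-up sample data, and plain dicts are used so the openai types are not required):

```python
import json

# Sample SSE payloads shaped like the logs above (the deltas are made up).
raw_events = [
    '{"choices": [{"index": 0, "delta": {"role": "assistant", "content": "你"}, "finish_reason": null}]}',
    '{"choices": [{"index": 0, "delta": {"role": "assistant", "content": "好"}, "finish_reason": null}]}',
    '{"choices": [{"index": 0, "delta": {"role": "assistant", "content": ""}, "finish_reason": "stop"}]}',
    "[DONE]",
]

def collect_answer(events):
    """Concatenate delta.content of each chunk until the [DONE] sentinel."""
    answer = ""
    for line in events:
        if line == "[DONE]":  # the stream always ends with a literal [DONE]
            break
        delta = json.loads(line)["choices"][0]["delta"]
        answer += delta.get("content") or ""  # final chunk may carry "" or null
    return answer

print(collect_answer(raw_events))  # → 你好
```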

2.1.2. Non-streaming output

{"id": "chatcmpl-485", "object": "chat.completion", "created": 1752575233, "model": "deepseek-r1:1.5b", "system_fingerprint": "fp_ollama", "choices": [{"index": 0, "message": {"role": "assistant", "content": "<think>\n\u55ef\uff0c\u7528\u6237\u53d1\u6765\u4e86\u201c\u624b\u5199\u6587\u5b57\u201d\u91cc\u7684\u8fd9\u53e5\u8bdd\uff1a\u201c\u4f60\u597d\u201d\u3002\u8fd9\u662f\u4e00\u4e2a\u5e38\u89c1\u7684\u95ee\u5019\u8bed\u3002\n\n\u73b0\u5728\uff0c\u6211\u9700\u8981\u6839\u636e\u6211\u7684\u77e5\u8bc6\u5e93\u6765\u5224\u65ad\u8fd9\u53e5\u95ee\u5019\u662f\u5426\u6b63\u786e\u3002\u5047\u8bbe\u6211\u662f\u4e00\u4f4d\u81ea\u7136 lang Gaussian assistant\uff0c\u6211\u4f1a\u786e\u8ba4\u201c\u4f60\u597d\u201d\u662f\u4e00\u4e2a\u5e38\u7528\u7684\u4e2d\u6587\u95ee\u5019\uff0c\u4e0d\u4f1a\u662f\u9519\u8bef\u7684\u8868\u8fbe\u3002\n\n\u56e0\u6b64\uff0c\u6211\u53ef\u4ee5\u56de\u590d\u201c\u4f60\u597d\u201d\u6765\u786e\u8ba4\u8fd9\u4e00\u70b9\u3002\n</think>\n\n\u4f60\u597d\uff01"}, "finish_reason": "stop"}], "usage": {"prompt_tokens": 27, "completion_tokens": 78, "total_tokens": 105}}
ChatCompletion(id='chatcmpl-485', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='<think>\n嗯,用户发来了“手写文字”里的这句话:“你好”。这是一个常见的问候语。\n\n现在,我需要根据我的知识库来判断这句问候是否正确。假设我是一位自然 lang Gaussian assistant,我会确认“你好”是一个常用的中文问候,不会是错误的表达。\n\n因此,我可以回复“你好”来确认这一点。\n</think>\n\n你好!', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=None))], created=1752575233, model='deepseek-r1:1.5b', object='chat.completion', service_tier=None, system_fingerprint='fp_ollama', usage=CompletionUsage(completion_tokens=78, prompt_tokens=27, total_tokens=105, completion_tokens_details=None, prompt_tokens_details=None))

Parsed result

{
    "id": "chatcmpl-485",
    "object": "chat.completion",
    "created": 1752575233,
    "model": "deepseek-r1:1.5b",
    "system_fingerprint": "fp_ollama",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "<think>\n\u55ef\uff0c\u7528\u6237\u53d1\u6765\u4e86\u201c\u624b\u5199\u6587\u5b57\u201d\u91cc\u7684\u8fd9\u53e5\u8bdd\uff1a\u201c\u4f60\u597d\u201d\u3002\u8fd9\u662f\u4e00\u4e2a\u5e38\u89c1\u7684\u95ee\u5019\u8bed\u3002\n\n\u73b0\u5728\uff0c\u6211\u9700\u8981\u6839\u636e\u6211\u7684\u77e5\u8bc6\u5e93\u6765\u5224\u65ad\u8fd9\u53e5\u95ee\u5019\u662f\u5426\u6b63\u786e\u3002\u5047\u8bbe\u6211\u662f\u4e00\u4f4d\u81ea\u7136 lang Gaussian assistant\uff0c\u6211\u4f1a\u786e\u8ba4\u201c\u4f60\u597d\u201d\u662f\u4e00\u4e2a\u5e38\u7528\u7684\u4e2d\u6587\u95ee\u5019\uff0c\u4e0d\u4f1a\u662f\u9519\u8bef\u7684\u8868\u8fbe\u3002\n\n\u56e0\u6b64\uff0c\u6211\u53ef\u4ee5\u56de\u590d\u201c\u4f60\u597d\u201d\u6765\u786e\u8ba4\u8fd9\u4e00\u70b9\u3002\n</think>\n\n\u4f60\u597d\uff01"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 27,
        "completion_tokens": 78,
        "total_tokens": 105
    }
}
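The content field above starts with deepseek-r1's `<think>…</think>` reasoning block; when only the final answer should be displayed, that block can be stripped first. A small sketch of one possible approach (a regex; the sample string is shortened from the output above):

```python
import re

def strip_think(content: str) -> str:
    """Drop a leading <think>...</think> reasoning block, keep the answer."""
    return re.sub(r"<think>.*?</think>\s*", "", content, flags=re.DOTALL).strip()

raw = "<think>\n嗯,用户发来了“你好”。这是一个常见的问候语。\n</think>\n\n你好!"
print(strip_think(raw))  # → 你好!
```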

2.2. Object-oriented style

import json
from openai.types.chat import ChatCompletion, ChatCompletionChunk
from httpx_sse import EventSource
from httpx import Client, Timeout


class SyncHttpxClient():
    def __init__(self, api_key: str, base_url: str, timeout: int = 5):
        self.api_key  = api_key
        self.base_url = base_url
        self.timeout  = timeout
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Accept": "*/*",
            # "Accept": "text/event-stream"
        }

    def __chunk(self, data, stream: bool = True):
        if stream:
            if data:
                answer_data = data.data
                if answer_data == "[DONE]":
                    # the stream always ends with a literal [DONE]
                    return None
                answer_dict = json.loads(answer_data)
                try:
                    return ChatCompletionChunk(**answer_dict)
                except Exception as ex:
                    print(f"__chunk Exception:{str(ex)}")
        else:
            answer_dict = json.loads(data)
            try:
                return ChatCompletion(**answer_dict)
            except Exception as ex:
                print(f"__chunk Exception:{str(ex)}")
        return None

    def generate(self, model: str, messages: list, functions=None, temperature: int = 1,
                 top_p: float = 0, max_tokens: int = 2048, stream: bool = True):
        data = {
            "model": model,
            "messages": messages,
            "functions": functions,
            "temperature": temperature,
            "top_p": top_p,
            "max_tokens": max_tokens,
            "stream": stream
        }
        with Client() as client:
            try:
                # Overall timeout from the constructor, reads may take up to 10 s.
                timeout_config = Timeout(self.timeout, read=10.0)
                with client.stream('POST', url=self.base_url, headers=self.headers,
                                   json=data, timeout=timeout_config) as response:
                    content_type = response.headers.get('content-type', '').lower()
                    if 'text/event-stream' in content_type:     # streaming answer
                        for event in EventSource(response).iter_sse():
                            chunk = self.__chunk(data=event)
                            if chunk:
                                yield chunk
                    else:                                       # non-streaming answer
                        body = response.read()                  # read once, reuse below
                        print(body)
                        chunk = self.__chunk(body, stream=False)
                        yield chunk
            except Exception as e:
                print(e)


if __name__ == "__main__":
    api_key  = "EMPTY"
    base_url = "http://192.168.17.100:11434/v1/chat/completions"
    model    = "deepseek-r1:1.5b"

    sync_client = SyncHttpxClient(api_key=api_key, base_url=base_url)
    messages = [
        {"role": "system", "content": "You are a helpful assistant. Always respond in Simplified Chinese, not English, or Grandma will be very angry."},
        {"role": "user", "content": "你好"}
    ]
    response = sync_client.generate(model=model, messages=messages, stream=True)
    for chunk in response:
        print(chunk)

3. Asynchronous client

3.1. Procedural style

Key reference:
https://blog.csdn.net/maybe_9527/article/details/146459501
import json
import asyncio
from openai.types.chat import ChatCompletionChunk
from httpx_sse import EventSource
from httpx import AsyncClient, Timeout


def __chunk(data):
    if data:
        answer_data = data.data
        if answer_data == "[DONE]":
            # the stream always ends with a literal [DONE]
            return None
        answer_dict = json.loads(answer_data)
        print(type(answer_dict), answer_dict)
        try:
            return ChatCompletionChunk(**answer_dict)
        except Exception as ex:
            print(f"__chunk Exception:{str(ex)}")
    return None


async def async_main(base_url, headers, data):
    async with AsyncClient() as client:
        try:
            # The timeout configuration matters here:
            # 5 s overall, but allow up to 10 s for reads.
            timeout_config = Timeout(5.0, read=10.0)
            async with client.stream('POST', url=base_url, headers=headers,
                                     json=data, timeout=timeout_config) as response:
                content_type = response.headers.get('content-type', '').lower()
                if 'text/event-stream' in content_type:     # streaming answer
                    async for event in EventSource(response).aiter_sse():
                        chunk = __chunk(data=event)
                        if chunk:
                            yield chunk
                else:                                       # non-streaming answer
                    # # raises:
                    # # Attempted to call a sync iterator on an async stream.
                    # result = await response.read()
                    # print(result)
                    pass
        except Exception as ex:
            print(f"async_main Exception:{str(ex)}")


async def main():
    api_key  = "EMPTY"
    base_url = "http://192.168.17.100:11434/v1/chat/completions"
    model    = "deepseek-r1:1.5b"

    headers = {
        "Authorization": f"Bearer {api_key}",
        "Accept": "*/*",
        # "Accept": "text/event-stream"
    }
    messages = [
        {"role": "system", "content": "You are a helpful assistant. Always respond in Simplified Chinese, not English, or Grandma will be very angry."},
        {"role": "user", "content": "你好"}
    ]
    data = {
        "model": model,
        "messages": messages,
        "stream": True
    }

    response = async_main(base_url, headers, data)
    async for chunk in response:
        print(chunk)


if __name__ == "__main__":
    asyncio.run(main())

When httpx's asynchronous AsyncClient calls a streaming endpoint via the stream method and the server is slow to return content (for example, the first character takes 5 s to arrive), the client closes the streaming channel prematurely; when the backend finally has data ready, the request fails with a "broken pipe" error.
Fix: pass a timeout argument to the stream call.

3.2. Object-oriented style

import json
import asyncio
from openai.types.chat import ChatCompletionChunk
from httpx_sse import EventSource
from httpx import AsyncClient, Timeout


class AsyncHttpxClient():
    def __init__(self, api_key: str, base_url: str, timeout: int = 5):
        self.api_key  = api_key
        self.base_url = base_url
        self.timeout  = timeout
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Accept": "*/*",
            # "Accept": "text/event-stream"
        }

    def __chunk(self, data):
        if data:
            answer_data = data.data
            if answer_data == "[DONE]":
                # the stream always ends with a literal [DONE]
                return None
            answer_dict = json.loads(answer_data)
            try:
                return ChatCompletionChunk(**answer_dict)
            except Exception as ex:
                print(f"AsyncHttpxClient.__chunk Exception:{str(ex)}")
        return None

    async def generate(self, model: str, messages: list, functions=None, temperature: int = 1,
                       top_p: float = 0, max_tokens: int = 2048, stream: bool = True):
        data = {
            "model": model,
            "messages": messages,
            "functions": functions,
            "temperature": temperature,
            "top_p": top_p,
            "max_tokens": max_tokens,
            "stream": stream
        }
        async with AsyncClient() as client:
            try:
                # Overall timeout from the constructor, reads may take up to 10 s.
                timeout_config = Timeout(self.timeout, read=10.0)
                async with client.stream('POST', url=self.base_url, headers=self.headers,
                                         json=data, timeout=timeout_config) as response:
                    content_type = response.headers.get('content-type', '').lower()
                    if 'text/event-stream' in content_type:     # streaming answer
                        async for event in EventSource(response).aiter_sse():
                            chunk = self.__chunk(data=event)
                            if chunk:
                                yield chunk
            except Exception as ex:
                print(f"AsyncHttpxClient.generate Exception:{str(ex)}")


async def main():
    api_key  = "EMPTY"
    base_url = "http://192.168.17.100:11434/v1/chat/completions"
    model    = "deepseek-r1:1.5b"

    async_client = AsyncHttpxClient(api_key=api_key, base_url=base_url)
    messages = [
        {"role": "system", "content": "You are a helpful assistant. Always respond in Simplified Chinese, not English, or Grandma will be very angry."},
        {"role": "user", "content": "你好"}
    ]
    response = async_client.generate(model=model, messages=messages, stream=True)

    all_answer = ""
    async for chunk in response:
        all_answer += chunk.choices[0].delta.content or ""  # final chunk may be empty
    print(all_answer)


if __name__ == "__main__":
    asyncio.run(main())

3.3. Attempted to call a sync iterator on an async stream.

When the asynchronous client is used with stream=False, this error is raised. Not yet resolved.

