Spring Boot chunked file upload with PostgreSQL (BLOB storage)

  • Scheme 1 (recommended)

Receive the complete file; the backend splits it into chunks and stores them (multi-threaded; suitable for large files).

```java
/**
 * Receive the complete file; the backend splits it into chunks and stores them
 * (multi-threaded; suitable for large files).
 * @param file the uploaded file
 * @return result message
 * @throws Exception on failure
 */
public String uploadChunkFile(MultipartFile file) throws Exception {
    String uploadId = UUID.randomUUID().toString();
    long fileSize = file.getSize();
    long totalChunks = (long) Math.ceil((double) fileSize / CHUNK_SIZE);
    if (totalChunks <= 0) {
        return "Invalid file size, cannot split into chunks";
    }
    // 1. Create a temporary directory for the chunks (avoids OOM for large files)
    File tempDir = Files.createTempDirectory("file-chunk-").toFile();
    // Delete the directory automatically when the JVM exits
    tempDir.deleteOnExit();
    try (InputStream inputStream = file.getInputStream()) {
        byte[] buffer = new byte[(int) CHUNK_SIZE];
        int bytesRead;
        int chunkIndex = 0;
        // 2. Write all chunks to temporary files first (streaming, low memory footprint)
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            File chunkFile = new File(tempDir, uploadId + "-" + chunkIndex);
            try (FileOutputStream fos = new FileOutputStream(chunkFile)) {
                fos.write(buffer, 0, bytesRead); // write only the bytes actually read
            }
            chunkIndex++;
        }
        // 3. Submit all chunk tasks at once and wait for completion via the utility class
        ThreadPoolUtils.getNewInstance().submitBatchTasks((int) totalChunks, taskIndex -> {
            try {
                // Read the temporary chunk file (each task loads only its own chunk)
                File chunkFile = new File(tempDir, uploadId + "-" + taskIndex);
                byte[] chunkData = Files.readAllBytes(chunkFile.toPath());
                // Store the chunk in the database
                FileUploadEntity entity = new FileUploadEntity();
                entity.setId(IdGenerator.nextId());
                entity.setUploadId(uploadId);
                entity.setChunkSize((long) chunkData.length);
                entity.setChunkNum(totalChunks);
                entity.setChunkFile(chunkData);
                entity.setChunkIndex(taskIndex);
                fileUploadMapper.insertFile(entity);
            } catch (IOException e) {
                throw new RuntimeException("Failed to store chunk " + taskIndex, e);
            }
        });
    } catch (Exception e) {
        log.error("Chunked file upload failed", e);
        throw new RuntimeException("Chunked file upload failed");
    } finally {
        // 4. Clean up the temporary files
        deleteDir(tempDir);
    }
    return "File stored in chunks successfully, uploadId: " + uploadId;
}

// Recursively delete the temporary directory
private boolean deleteDir(File dir) {
    if (dir.isDirectory()) {
        File[] children = dir.listFiles();
        if (children != null) {
            for (File child : children) {
                deleteDir(child);
            }
        }
    }
    return dir.delete();
}
```
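All of the schemes below share the same chunk arithmetic: `totalChunks = ceil(fileSize / CHUNK_SIZE)`, with the last chunk possibly shorter than `CHUNK_SIZE`. A minimal, self-contained sketch (the 5 MB `CHUNK_SIZE` is an assumed value, not taken from the original service):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ChunkMath {
    // Assumed chunk size for illustration; the service's CHUNK_SIZE may differ
    static final long CHUNK_SIZE = 5 * 1024 * 1024;

    // Total number of chunks for a file of fileSize bytes (same formula as the service code)
    public static long totalChunks(long fileSize, long chunkSize) {
        return (long) Math.ceil((double) fileSize / chunkSize);
    }

    // Split a byte array into chunks; the last chunk may be shorter than chunkSize
    public static List<byte[]> split(byte[] data, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < data.length; offset += chunkSize) {
            int end = Math.min(offset + chunkSize, data.length);
            chunks.add(Arrays.copyOfRange(data, offset, end));
        }
        return chunks;
    }
}
```

For example, a 10-byte input with a chunk size of 4 yields three chunks of 4, 4, and 2 bytes.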
  • Scheme 2

Receive the complete file; the backend splits it into chunks and stores them (multi-threaded; small files only). Large files may cause an OutOfMemoryError, because every chunk is held in memory.

```java
/**
 * Receive the complete file; the backend splits it into chunks and stores them with
 * multiple threads (small files only; large files may exhaust the heap).
 * @param file the uploaded file
 * @return result message
 * @throws IOException on read failure
 * @throws InterruptedException if the wait is interrupted
 */
public String uploadChunkFile(MultipartFile file) throws IOException, InterruptedException {
    // Generate a unique upload ID that ties all chunks of this file together
    String uploadId = UUID.randomUUID().toString();
    String fileName = file.getOriginalFilename();
    long fileSize = file.getSize();
    // Compute the total number of chunks
    long totalChunks = (long) Math.ceil((double) fileSize / CHUNK_SIZE);
    if (totalChunks <= 0) {
        return "Invalid file size, cannot split into chunks";
    }
    // Read every chunk into memory (fine for small files; use temp files on disk for large ones)
    List<byte[]> chunkDataList = new ArrayList<>();
    try (InputStream inputStream = file.getInputStream()) {
        byte[] buffer = new byte[(int) CHUNK_SIZE];
        int bytesRead;
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            byte[] chunkData = new byte[bytesRead];
            System.arraycopy(buffer, 0, chunkData, 0, bytesRead);
            chunkDataList.add(chunkData);
        }
    }
    // Get the thread pool utility instance
    ThreadPoolUtils threadPool = ThreadPoolUtils.getNewInstance();
    // Submit the batch of chunk tasks and wait for completion
    threadPool.submitBatchTasks((int) totalChunks, chunkIndex -> {
        byte[] currentChunkData = chunkDataList.get(chunkIndex);
        long currentChunkSize = currentChunkData.length;
        // Store the chunk in the database
        FileUploadEntity fileUpload = new FileUploadEntity();
        fileUpload.setId(IdGenerator.nextId());
        fileUpload.setUploadId(uploadId);
        fileUpload.setChunkSize(currentChunkSize);
        fileUpload.setChunkNum(totalChunks);
        fileUpload.setChunkFile(currentChunkData);
        fileUpload.setChunkIndex(chunkIndex);
        fileUploadMapper.insertFile(fileUpload);
    });
    return "File stored in chunks successfully, uploadId: " + uploadId;
}
```
  • Scheme 3

Receive the complete file; the backend splits it into chunks and stores them (single-threaded). Uploading large files takes too long.

```java
/**
 * Receive the complete file; the backend splits it into chunks and stores them
 * (single-threaded).
 * @param file the uploaded file
 * @return result message
 * @throws IOException on read failure
 */
public String uploadChunkFileBackup(MultipartFile file) throws IOException {
    // Generate a unique upload ID that ties all chunks of this file together
    String uploadId = UUID.randomUUID().toString();
    String fileName = file.getOriginalFilename();
    long fileSize = file.getSize();
    // Compute the total number of chunks
    long totalChunks = (long) Math.ceil((double) fileSize / CHUNK_SIZE);
    List<FileUploadEntity> list = new ArrayList<>();
    try (InputStream inputStream = file.getInputStream()) {
        byte[] buffer = new byte[(int) CHUNK_SIZE];
        int bytesRead;
        int chunkIndex = 0;
        // Read the file and split it into chunks
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            // The last chunk may be smaller than CHUNK_SIZE
            byte[] chunkData = new byte[bytesRead];
            System.arraycopy(buffer, 0, chunkData, 0, bytesRead);
            // Actual size of this chunk in bytes
            long chunkActualSize = bytesRead;
            // Store the current chunk
//            saveChunk(uploadId, chunkIndex, totalChunks, chunkData, fileSize, fileName);
            FileUploadEntity fileUpload = new FileUploadEntity();
            fileUpload.setId(IdGenerator.nextId());
            fileUpload.setUploadId(uploadId);
            fileUpload.setChunkSize(chunkActualSize);
            fileUpload.setChunkNum(totalChunks);
            fileUpload.setChunkFile(chunkData);
            fileUpload.setChunkIndex(chunkIndex);
            fileUploadMapper.insertFile(fileUpload);
//            list.add(fileUpload);
            chunkIndex++;
        }
    }
    // Batch insert (alternative, commented out)
//    int batchSize = 500;
//    for (int i = 0; i < list.size(); i += batchSize) {
//        int end = Math.min(i + batchSize, list.size());
//        List<FileUploadEntity> subList = list.subList(i, end);
//        fileUploadMapper.batchInsert(subList);
//    }
    return "File stored in chunks successfully, uploadId: " + uploadId;
}
```
  • Scheme 4

    Receive the complete file; the backend splits it into chunks and stores them with multiple threads (thread pool created inline, not wrapped in a utility class).

```java
/**
 * Receive the complete file; the backend splits it into chunks and stores them
 * with multiple threads (thread pool created inline, not wrapped in a utility class).
 * @param file the uploaded file
 * @return result message
 * @throws IOException on read failure
 * @throws InterruptedException if the wait is interrupted
 */
// @Override
public String uploadChunkFile(MultipartFile file) throws IOException, InterruptedException {
    // Generate a unique upload ID that ties all chunks of this file together
    String uploadId = UUID.randomUUID().toString();
    String fileName = file.getOriginalFilename();
    long fileSize = file.getSize();
    // Compute the total number of chunks
    long totalChunks = (long) Math.ceil((double) fileSize / CHUNK_SIZE);
    // Create a thread pool; size it to the server, commonly CPU cores * 2 + 1
    int corePoolSize = Runtime.getRuntime().availableProcessors() * 2 + 1;
    ExecutorService executorService = Executors.newFixedThreadPool(corePoolSize);
    // Use a CountDownLatch to wait for all workers
    CountDownLatch countDownLatch = new CountDownLatch((int) totalChunks);
    try (InputStream inputStream = file.getInputStream()) {
        byte[] buffer = new byte[(int) CHUNK_SIZE];
        int bytesRead;
        int chunkIndex = 0;
        // Read the file and split it into chunks
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            // The last chunk may be smaller than CHUNK_SIZE
            byte[] chunkData = new byte[bytesRead];
            System.arraycopy(buffer, 0, chunkData, 0, bytesRead);
            long chunkActualSize = bytesRead;
            // Snapshot the loop variables for the lambda (thread safety)
            final int currentChunkIndex = chunkIndex;
            final byte[] currentChunkData = chunkData;
            final long currentChunkSize = chunkActualSize;
            // Submit the chunk storage task to the pool
            executorService.submit(() -> {
                try {
                    FileUploadEntity fileUpload = new FileUploadEntity();
                    fileUpload.setId(IdGenerator.nextId());
                    fileUpload.setUploadId(uploadId);
                    fileUpload.setChunkSize(currentChunkSize);
                    fileUpload.setChunkNum(totalChunks);
                    fileUpload.setChunkFile(currentChunkData);
                    fileUpload.setChunkIndex(currentChunkIndex);
                    fileUploadMapper.insertFile(fileUpload);
                } finally {
                    // Always count down, even when an exception occurs
                    countDownLatch.countDown();
                }
            });
            chunkIndex++;
        }
        // Wait until every chunk has been processed
        countDownLatch.await();
    } finally {
        // Shut down the thread pool
        executorService.shutdown();
    }
    return "File stored in chunks successfully, uploadId: " + uploadId;
}
```
  • Scheme 5

    Large Object scheme

```java
/**
 * Large Object scheme.
 *
 * PostgreSQL's Large Object mechanism works as follows:
 * - binary data is written through LargeObjectManager, which returns an OID (a numeric object ID)
 * - the table stores only that OID, not the binary data itself
 * - on read, the data is fetched from the large object manager by OID
 * @param file the uploaded file
 * @return result message
 */
@Override
public String uploadLargeObjectFile(MultipartFile file) {
    if (file.isEmpty()) {
        return "Please select a file";
    }
    try {
        long fileSize = file.getSize();
        String fileName = file.getOriginalFilename();
        long largeObjectId = postgresLargeObjectUtil.createLargeObject(file.getInputStream());
        FileUploadEntity fileUpload = new FileUploadEntity();
        fileUpload.setId(IdGenerator.nextId());
        fileUpload.setUploadId(String.valueOf(largeObjectId));
        fileUpload.setChunkSize(fileSize);
        fileUpload.setChunkNum(fileSize);
        fileUpload.setChunkFile(null);
        fileUpload.setChunkIndex(2);
        fileUploadMapper.insertLargeObjectFile(fileUpload);
        return "Large file uploaded. File name: " + fileName + ", size: " + fileSize + " bytes";
    } catch (Exception e) {
        log.error("Failed to upload large file", e);
        return "Upload failed: " + e.getMessage();
    }
}

// Download
@Override
public void downloadFile(Long fileId, HttpServletResponse response) {
    FileUploadEntity fileEntity = fileUploadMapper.getFileById(fileId);
    long oid = Long.valueOf(fileEntity.getUploadId());
    try {
        response.reset();
        response.setContentType("application/octet-stream");
        String filename = "fileName.zip";
        response.addHeader("Content-Disposition",
                "attachment; filename=" + URLEncoder.encode(filename, "UTF-8"));
        ServletOutputStream outputStream = response.getOutputStream();
        postgresLargeObjectUtil.readLargeObject(oid, outputStream);
    } catch (Exception e) {
        log.error("Failed to download file", e);
    }
}
```
  • Scheme 6

    Direct byte upload (file bytes into a bytea column)

```java
/**
 * Upload the file's bytes directly into a bytea column.
 * @param file the uploaded file
 * @return result message
 */
@Override
public String uploadFileByte(MultipartFile file) {
    if (file.isEmpty()) {
        return "Please select a file";
    }
    try {
        // Gather file info
        String fileName = file.getOriginalFilename();
        long fileSize = file.getSize();
        byte[] fileData = file.getBytes(); // OK for small files: load the whole byte array
        // Run the insert (for large files prefer the stream: file.getInputStream())
        String sql = "INSERT INTO system_upload_test (id, upload_id, chunk_size, chunk_num, chunk_file, chunk_index) VALUES (?, ?, ?, ?, ?, ?)";
        jdbcTemplate.update(sql, 111L, "2222", 222L, 3L, fileData, 33L);
        return "File uploaded successfully!";
    } catch (Exception e) {
        e.printStackTrace();
        return "File upload failed: " + e.getMessage();
    }
}

// Large files: stream via file.getInputStream()
public String uploadBigFile(MultipartFile file) throws Exception {
    // 1. Define the SQL (column order must match the placeholders)
    String sql = "INSERT INTO user_qgcgk_app.system_upload_test " +
            "(id, upload_id, chunk_size, chunk_num, chunk_file, chunk_index) " +
            "VALUES (?, ?, ?, ?, ?, ?)";
    // 2. Prepare the parameters (make sure the InputStream stays open until execution)
    Long id = 1795166209435262976L;
    String uploadId = "3333";
    Long chunkSize = 7068L;
    Long chunkNum = 7068L;
    InputStream chunkInputStream = file.getInputStream(); // e.g. FileInputStream, ServletInputStream
    Integer chunkIndex = 2;
    try {
        // 3. Execute the SQL, binding the parameters manually via PreparedStatementSetter
        jdbcTemplate.update(sql, new PreparedStatementSetter() {
            @Override
            public void setValues(PreparedStatement ps) throws SQLException {
                // Bind the non-stream parameters in order, with matching types
                ps.setLong(1, id);                  // 1st parameter: id (Long)
                ps.setString(2, uploadId);          // 2nd parameter: upload_id (String)
                ps.setLong(3, chunkSize);           // 3rd parameter: chunk_size (Long)
                ps.setLong(4, chunkNum);            // 4th parameter: chunk_num (Long)
                // Key step: bind the InputStream to the bytea column (5th parameter),
                // passing the known stream length
                ps.setBinaryStream(5, chunkInputStream, file.getSize());
                ps.setInt(6, chunkIndex);           // 6th parameter: chunk_index (Int)
            }
        });
    } finally {
        // 4. Close the stream once the statement has run
        if (chunkInputStream != null) {
            chunkInputStream.close();
        }
    }
    return "Upload successful!";
}
```
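For reference, a plausible DDL for the chunk table, reconstructed from the INSERT statement above. The column types are assumptions inferred from the bound parameters, not taken from the original project:

```sql
-- Assumed schema, reconstructed from the INSERT above; adjust types to your project
CREATE TABLE system_upload_test (
    id          BIGINT PRIMARY KEY,
    upload_id   VARCHAR(64),   -- groups all chunks of one file (or holds the LO OID in scheme 5)
    chunk_size  BIGINT,        -- actual size of this chunk in bytes
    chunk_num   BIGINT,        -- total number of chunks
    chunk_file  BYTEA,         -- the chunk payload (BLOB storage)
    chunk_index BIGINT         -- position of this chunk within the file
);
```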
  • Scheme 7
    No temporary files + multi-threading (fewer I/O operations). Two variants follow: batch insert and one insert per chunk.
```java
/**
 * Chunked upload without temporary files: multi-threaded, batch inserts.
 */
public String uploadChunkFile(MultipartFile file) throws Exception {
    // Generate a unique upload ID
    String uploadId = UUID.randomUUID().toString();
    long fileSize = file.getSize();
    long totalChunks = (long) Math.ceil((double) fileSize / CHUNK_SIZE);
    if (totalChunks <= 0) {
        return "Invalid file size, cannot split into chunks";
    }
    try (InputStream inputStream = file.getInputStream()) {
        byte[] buffer = new byte[(int) CHUNK_SIZE];
        int bytesRead;
        int chunkIndex = 0;
        // Batch buffer: insert every 10 chunks as one batch
        List<FileUploadEntity> batchList = new ArrayList<>(10);
        // Latch: wait for all batch tasks to complete
        CountDownLatch latch = new CountDownLatch((int) Math.ceil((double) totalChunks / 10));
        // Stream the file and build chunks
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            // Copy the chunk data so the next read does not overwrite it
            byte[] chunkData = Arrays.copyOfRange(buffer, 0, bytesRead);
            // Build the chunk entity
            FileUploadEntity entity = new FileUploadEntity();
            entity.setId(IdGenerator.nextId());
            entity.setUploadId(uploadId);
            entity.setChunkSize((long) chunkData.length);
            entity.setChunkNum(totalChunks);
            entity.setChunkFile(chunkData);
            entity.setChunkIndex(chunkIndex);
            batchList.add(entity);
            chunkIndex++;
            // Flush when the batch is full or this was the final chunk
            if (batchList.size() >= 10 || chunkIndex == totalChunks) {
                // Copy the current batch so the async task has its own list
                List<FileUploadEntity> currentBatch = new ArrayList<>(batchList);
                // Submit the batch insert task
                ThreadPoolUtils.getNewInstance().executor(() -> {
                    try {
                        fileUploadMapper.batchInsert(currentBatch);
                    } finally {
                        latch.countDown(); // one batch done
                    }
                });
                batchList.clear(); // reset the buffer
            }
        }
        // Wait up to 5 minutes for all batch tasks
        boolean allCompleted = latch.await(5, java.util.concurrent.TimeUnit.MINUTES);
        if (!allCompleted) {
            throw new BusinessException("Chunked upload timed out, please retry");
        }
    } catch (Exception e) {
        log.error("Chunked file upload failed, uploadId: {}", uploadId, e);
        // Optionally clean up chunks that were already stored
//        fileUploadMapper.deleteByUploadId(uploadId);
        throw new BusinessException("Chunked file upload failed: " + e.getMessage());
    }
    return "File stored in chunks successfully, uploadId: " + uploadId;
}

/**
 * Chunked upload without temporary files: multi-threaded, one insert per chunk.
 */
public String uploadChunkFile(MultipartFile file) throws Exception {
    String uploadId = UUID.randomUUID().toString();
    long fileSize = file.getSize();
    long totalChunks = (long) Math.ceil((double) fileSize / CHUNK_SIZE);
    if (totalChunks <= 0) {
        return "Invalid file size, cannot split into chunks";
    }
    try (InputStream inputStream = file.getInputStream()) {
        byte[] buffer = new byte[(int) CHUNK_SIZE];
        int bytesRead;
        int chunkIndex = 0;
        // Wait for all chunks to finish
        CountDownLatch latch = new CountDownLatch((int) totalChunks);
        // Read and submit chunk tasks on the fly; no temporary files needed
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            // Copy the chunk data so the next read does not overwrite it
            byte[] chunkData = Arrays.copyOfRange(buffer, 0, bytesRead);
            final int currentIndex = chunkIndex;
            // Submit the async task
            ThreadPoolUtils.getNewInstance().executor(() -> {
                try {
                    // Write the in-memory chunk straight to the database
                    FileUploadEntity entity = new FileUploadEntity();
                    entity.setId(IdGenerator.nextId());
                    entity.setUploadId(uploadId);
                    entity.setChunkSize((long) chunkData.length);
                    entity.setChunkNum(totalChunks);
                    entity.setChunkFile(chunkData);
                    entity.setChunkIndex(currentIndex);
                    fileUploadMapper.insertFile(entity);
                } catch (Exception e) {
                    throw new RuntimeException("Failed to store chunk " + currentIndex, e);
                } finally {
                    latch.countDown();
                }
            });
            chunkIndex++;
        }
        // Wait until every chunk has been stored
        latch.await();
    } catch (Exception e) {
        log.error("Chunked file upload failed", e);
        throw new BusinessException("Chunked file upload failed");
    }
    return "File stored in chunks successfully, uploadId: " + uploadId;
}
```
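The schemes above only cover the upload side for chunked storage. To serve a download, the chunks for an `uploadId` must be fetched, ordered by `chunkIndex`, and concatenated. A minimal, self-contained sketch of the merge step, using a stand-in class instead of `FileUploadEntity` (in the real service the list would come from a mapper query, whose name would be project-specific):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Comparator;
import java.util.List;

public class ChunkMerger {
    // Minimal stand-in for FileUploadEntity: just the index and the payload
    public static class Chunk {
        public final int index;
        public final byte[] data;
        public Chunk(int index, byte[] data) {
            this.index = index;
            this.data = data;
        }
    }

    // Restore the original byte order by sorting on chunkIndex, then concatenate
    public static byte[] merge(List<Chunk> chunks) throws IOException {
        chunks.sort(Comparator.comparingInt(c -> c.index));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (Chunk c : chunks) {
            out.write(c.data);
        }
        return out.toByteArray();
    }
}
```

In a real controller the concatenation would stream each chunk to `response.getOutputStream()` instead of buffering the whole file, but the ordering logic is the same.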
  • Utility class: PostgreSQL Large Object helper

```java
import lombok.extern.slf4j.Slf4j;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

import java.io.InputStream;
import java.io.OutputStream;
import java.sql.Connection;
import java.sql.SQLException;

/**
 * PostgreSQL Large Object utility class
 * @author zrf
 * @date 2025/08/25 16:09
 */
@Slf4j
@Component
public class PostgresLargeObjectUtil {

    private final JdbcTemplate jdbcTemplate;

    public PostgresLargeObjectUtil(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    /**
     * Create a large object from an input stream and return its OID.
     */
    @Transactional
    public long createLargeObject(InputStream inputStream) throws SQLException {
        // Get a connection and disable auto-commit (required by the large object API)
        Connection connection = jdbcTemplate.getDataSource().getConnection();
        connection.setAutoCommit(false);
        try {
            // Get the PostgreSQL large object manager
            LargeObjectManager lobjManager =
                    connection.unwrap(org.postgresql.PGConnection.class).getLargeObjectAPI();
            // Create the large object; returns its OID
            long oid = lobjManager.createLO(LargeObjectManager.READ | LargeObjectManager.WRITE);
            // Open the large object and write the data
            try (LargeObject largeObject = lobjManager.open(oid, LargeObjectManager.WRITE)) {
                OutputStream outputStream = largeObject.getOutputStream();
                byte[] buffer = new byte[8192];
                int bytesRead;
                while ((bytesRead = inputStream.read(buffer)) != -1) {
                    outputStream.write(buffer, 0, bytesRead);
                }
            }
            connection.commit();
            return oid;
        } catch (Exception e) {
            connection.rollback();
            throw new SQLException("Failed to create large object", e);
        } finally {
            connection.close();
        }
    }

    /**
     * Read the large object identified by the OID into an output stream.
     */
    public void readLargeObject(long oid, OutputStream outputStream) throws Exception {
        Connection connection = jdbcTemplate.getDataSource().getConnection();
        connection.setAutoCommit(false);
        try {
            LargeObjectManager lobjManager =
                    connection.unwrap(org.postgresql.PGConnection.class).getLargeObjectAPI();
            try (LargeObject largeObject = lobjManager.open(oid, LargeObjectManager.READ)) {
                InputStream inputStream = largeObject.getInputStream();
                byte[] buffer = new byte[8192];
                int bytesRead;
                while ((bytesRead = inputStream.read(buffer)) != -1) {
                    outputStream.write(buffer, 0, bytesRead);
                }
            }
            connection.commit();
        } catch (Exception e) {
            connection.rollback();
            log.error("Failed to read large object", e);
        } finally {
            connection.close();
        }
    }

    /**
     * Delete a large object (frees the disk space).
     */
    @Transactional
    public void deleteLargeObject(long oid) throws SQLException {
        Connection connection = jdbcTemplate.getDataSource().getConnection();
        connection.setAutoCommit(false);
        try {
            LargeObjectManager lobjManager =
                    connection.unwrap(org.postgresql.PGConnection.class).getLargeObjectAPI();
            lobjManager.delete(oid);
            connection.commit();
        } catch (Exception e) {
            connection.rollback();
            throw new SQLException("Failed to delete large object", e);
        } finally {
            connection.close();
        }
    }
}
```

  • Utility class: thread pool helper

```java
import java.util.List;
import java.util.concurrent.*;
import java.util.function.Consumer;

/**
 * Thread pool utility class
 * @author zrf
 * @date 2023/8/14 10:05
 */
public class ThreadPoolUtils {

    /** Number of available processors */
    private static final int CPU_COUNT = Runtime.getRuntime().availableProcessors();
    /** Core pool size */
    private static final int CORE_POOL_SIZE = Math.max(2, Math.min(CPU_COUNT - 1, 4));
    /** Maximum pool size */
    private static final int MAXIMUM_POOL_SIZE = CPU_COUNT * 2 + 1;
    /** Idle thread keep-alive time (seconds) */
    private static final int KEEP_ALIVE_SECONDS = 30;
    /** Work queue */
    private static final BlockingQueue<Runnable> POOL_WORK_QUEUE = new LinkedBlockingQueue<>(128);
    /** Thread factory (custom class, defined elsewhere in the project) */
    private static final MyThreadFactory MY_THREAD_FACTORY = new MyThreadFactory();
    /** Rejection policy (custom wrapper, defined elsewhere in the project) */
    private static final ThreadRejectedExecutionHandler THREAD_REJECTED_EXECUTION_HANDLER =
            new ThreadRejectedExecutionHandler.CallerRunsPolicy();
    /** The thread pool */
    private static final ThreadPoolExecutor THREAD_POOL_EXECUTOR;
    /** Singleton instance; volatile so the reference is visible to all threads */
    private static volatile ThreadPoolUtils threadPoolUtils = null;

    // Initialize the thread pool
    static {
        THREAD_POOL_EXECUTOR = new ThreadPoolExecutor(
                CORE_POOL_SIZE,
                MAXIMUM_POOL_SIZE,
                KEEP_ALIVE_SECONDS,
                TimeUnit.SECONDS,
                POOL_WORK_QUEUE,
                MY_THREAD_FACTORY,
                THREAD_REJECTED_EXECUTION_HANDLER);
    }

    private ThreadPoolUtils() {
    }

    /**
     * Get the singleton instance (double-checked locking).
     */
    public static ThreadPoolUtils getNewInstance() {
        if (threadPoolUtils == null) {
            synchronized (ThreadPoolUtils.class) {
                if (threadPoolUtils == null) {
                    threadPoolUtils = new ThreadPoolUtils();
                }
            }
        }
        return threadPoolUtils;
    }

    /**
     * Execute a task with no return value.
     * @param runnable the task
     */
    public void executor(Runnable runnable) {
        THREAD_POOL_EXECUTOR.execute(runnable);
    }

    /**
     * Submit a task with a return value.
     * @param callable the task
     */
    public <T> Future<T> submit(Callable<T> callable) {
        return THREAD_POOL_EXECUTOR.submit(callable);
    }

    /**
     * Submit a batch of tasks and wait until all of them complete.
     * @param totalTasks total number of tasks
     * @param taskConsumer consumer that receives the task index and runs the task logic
     * @throws InterruptedException if the wait is interrupted
     */
    public void submitBatchTasks(int totalTasks, Consumer<Integer> taskConsumer) throws InterruptedException {
        CountDownLatch countDownLatch = new CountDownLatch(totalTasks);
        for (int i = 0; i < totalTasks; i++) {
            final int taskIndex = i;
            // Submit the task through the shared pool
            THREAD_POOL_EXECUTOR.submit(() -> {
                try {
                    taskConsumer.accept(taskIndex); // run the task logic
                } finally {
                    countDownLatch.countDown(); // always count down, even on failure
                }
            });
        }
        countDownLatch.await(); // wait for all tasks to finish
    }

    /**
     * @return whether the pool has been shut down
     */
    public boolean isShutDown() {
        return THREAD_POOL_EXECUTOR.isShutdown();
    }

    /**
     * Stop running tasks immediately.
     * @return the list of tasks that were awaiting execution
     */
    public List<Runnable> shutDownNow() {
        return THREAD_POOL_EXECUTOR.shutdownNow();
    }

    /**
     * Shut down the pool gracefully.
     */
    public void shutDown() {
        THREAD_POOL_EXECUTOR.shutdown();
    }

    /**
     * @return whether all tasks have completed after shutdown
     */
    public boolean isTerminated() {
        return THREAD_POOL_EXECUTOR.isTerminated();
    }
}
```
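The core of `submitBatchTasks` is the CountDownLatch pattern: submit N indexed tasks, count down in a `finally` block, and have the caller block until the latch reaches zero. A self-contained demo of that same pattern against a plain JDK pool (no dependency on the utility class above):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntConsumer;

public class BatchTaskDemo {
    // Submit totalTasks indexed tasks and block until every one has finished
    public static void runBatch(ExecutorService pool, int totalTasks, IntConsumer task)
            throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(totalTasks);
        for (int i = 0; i < totalTasks; i++) {
            final int taskIndex = i; // snapshot for the lambda
            pool.submit(() -> {
                try {
                    task.accept(taskIndex);
                } finally {
                    latch.countDown(); // always count down, even if the task throws
                }
            });
        }
        latch.await(); // returns only after all countDown() calls
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger done = new AtomicInteger();
        runBatch(pool, 10, i -> done.incrementAndGet());
        pool.shutdown();
        System.out.println(done.get()); // prints 10: the latch guarantees all tasks ran
    }
}
```

Counting down in `finally` matters: if a task throws and the countdown were skipped, `latch.await()` would block forever.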
