Spring Boot ShardingJDBC Database and Table Sharding (Draft)

2025/2/22 0:58:48

ShardingJDBC Database and Table Sharding

1. Maven Dependencies

		<dependency>
			<groupId>org.apache.shardingsphere</groupId>
			<artifactId>sharding-jdbc-spring-boot-starter</artifactId>
			<version>4.1.1</version>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-data-jpa</artifactId>
		</dependency>
		<dependency>
			<groupId>mysql</groupId>
			<artifactId>mysql-connector-java</artifactId>
		</dependency>

2. Databases and Tables

Databases:
*****_ch
*****_hk
*****_us
*****_olap

Tables:

Logical table kline — 16 shard tables per period (M5, M30, M60, D, W, M, Y):
kline_D_0
.......
kline_D_15

kline_M_0
.......
kline_M_15

kline_M5_0
.......
kline_M5_15

kline_M30_0
.......
kline_M30_15

kline_M60_0
.......
kline_M60_15

kline_W_0
.......
kline_W_15

kline_Y_0
.......
kline_Y_15

Logical table kline_m1 — one table per trading day (suffix yyMMdd):
kline_m1_250121
.......
kline_m1_250221

Template table: trade_record_240101

Logical table trade_record — 250 shard tables per trading day:
trade_record_250213_0
........
trade_record_250221_249
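As a quick illustration of the naming scheme above (a standalone sketch; the class and method names are made up for the example, while the shard counts 16 and 250 mirror the table lists):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class TableNameSketch {
    // kline is split by period and symbol_id: kline_<period>_<symbolId % 16>
    public static String klineTable(String period, long symbolId) {
        return "kline_" + period + "_" + (symbolId % 16);
    }

    // trade_record is split by trade date and symbol_id:
    // trade_record_<yyMMdd>_<symbolId % 250>
    public static String tradeRecordTable(LocalDate tradeDate, long symbolId) {
        String day = tradeDate.format(DateTimeFormatter.ofPattern("yyMMdd"));
        return "trade_record_" + day + "_" + (symbolId % 250);
    }

    public static void main(String[] args) {
        System.out.println(klineTable("D", 12345L));                             // kline_D_9
        System.out.println(tradeRecordTable(LocalDate.of(2025, 2, 21), 12345L)); // trade_record_250221_95
    }
}
```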
CREATE DEFINER=`admin`@`%` PROCEDURE `CreateKlineTables`()
BEGIN
    DECLARE i INT DEFAULT 0;
    DECLARE j INT DEFAULT 0;
    DECLARE table_name VARCHAR(64);
    DECLARE date_parts TEXT;
    DECLARE date_part VARCHAR(10);

    -- Comma-separated list of kline periods
    SET date_parts = 'M5,M30,M60,D,W,M,Y';
    -- Iterate over the periods
    WHILE j < LENGTH(date_parts) - LENGTH(REPLACE(date_parts, ',', '')) + 1 DO
        SET date_part = SUBSTRING_INDEX(SUBSTRING_INDEX(date_parts, ',', j + 1), ',', -1);

        -- Create the 16 shard tables for this period
        SET i = 0;
        WHILE i < 16 DO
            SET table_name = CONCAT('kline_', date_part, '_', i);

            SET @sql = CONCAT('CREATE TABLE IF NOT EXISTS ', table_name, ' LIKE kline');

            PREPARE stmt FROM @sql;
            EXECUTE stmt;
            DEALLOCATE PREPARE stmt;

            SET i = i + 1;
        END WHILE;

        SET j = j + 1;
    END WHILE;
END

CREATE DEFINER=`admin`@`%` PROCEDURE `CreateTradeRecordTables`(IN date_part VARCHAR(10))
BEGIN
    DECLARE i INT DEFAULT 0;
    DECLARE table_name VARCHAR(64);

    -- Create the 250 shard tables for the given day
    WHILE i < 250 DO
        SET table_name = CONCAT('trade_record_', date_part, '_', i);

        SET @sql = CONCAT('CREATE TABLE IF NOT EXISTS ', table_name, ' LIKE trade_record_240101');

        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;

        SET i = i + 1;
    END WHILE;
END

CREATE DEFINER=`admin`@`%` PROCEDURE `DropTradeRecordTables`(IN date_part VARCHAR(10))
BEGIN
    DECLARE i INT DEFAULT 0;
    DECLARE table_name VARCHAR(64);

    -- Drop the 250 shard tables for the given day
    WHILE i < 250 DO
        SET table_name = CONCAT('trade_record_', date_part, '_', i);

        SET @sql = CONCAT('DROP TABLE IF EXISTS ', table_name);

        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;

        SET i = i + 1;
    END WHILE;
END

3. application.yaml Configuration

  • Configuration file
server:
  port: 8888
  tomcat:
    uri-encoding: UTF-8
    max-http-post-size: 20MB
  max-http-header-size: 20MB
spring:
  http:
    encoding:
      force: true
      charset: UTF-8
      enabled: true
  aop:
    auto: true
  main:
    allow-bean-definition-overriding: true

  jpa:
    database-platform: org.hibernate.dialect.MySQL5InnoDBDialect
    show-sql: false
    hibernate:
      ddl-auto: none
  dsx:
    olap:
      type: com.zaxxer.hikari.HikariDataSource
      driverClassName: com.mysql.cj.jdbc.Driver
      jdbcUrl: 
      username: 
      password: 
      hikari:
        maximum-pool-size: 20
        minimum-idle: 20
  shardingsphere:
    datasource:
      names: center, ds0, ds1, ds2
      center:
        type: com.zaxxer.hikari.HikariDataSource
        driverClassName: com.mysql.cj.jdbc.Driver
        jdbcUrl: 
        username: 
        password: 
        hikari:
          maximum-pool-size: 20
          minimum-idle: 20
      ds0:
        type: com.zaxxer.hikari.HikariDataSource
        driverClassName: com.mysql.cj.jdbc.Driver
        jdbcUrl: 
        username: 
        password: 
        hikari:
          maximum-pool-size: 20
          minimum-idle: 20
      ds1:
        type: com.zaxxer.hikari.HikariDataSource
        driverClassName: com.mysql.cj.jdbc.Driver
        jdbcUrl: 
        username: 
        password: 
        hikari:
          maximum-pool-size: 20
          minimum-idle: 20
      ds2:
        type: com.zaxxer.hikari.HikariDataSource
        driverClassName: com.mysql.cj.jdbc.Driver
        jdbcUrl: 
        username: 
        password: 
        hikari:
          maximum-pool-size: 20
          minimum-idle: 20
    props:
      sql:
        show: false
    sharding:
      default-data-source-name: center
      tables:
        trade_record:
          actual-data-nodes: ds$->{0..2}.trade_record_$->{0..10}
          database-strategy: 
            standard: 
              sharding-column: market_code
              precise-algorithm-class-name: com.zzc.sharding.DbShardingByMarketTypeAlgorithm
          table-strategy:
            complex:
              sharding-columns: trade_date,symbol_id
              algorithm-class-name: com.zzc.sharding.TableShardingByDateAndSymbolAlgorithm
        kline_m1:
          actual-data-nodes: ds$->{0..2}.kline_m1
          # actual-data-nodes: ds$->{0..1}
          database-strategy:
            standard:
              sharding-column: market_code
              precise-algorithm-class-name: com.zzc.sharding.DbShardingByMarketTypeAlgorithm
          table-strategy:
            complex:
              sharding-columns: trade_date
              algorithm-class-name: com.zzc.sharding.TableShardingByDateAlg
        kline:
          actual-data-nodes: ds$->{0..2}.kline_${['M5', 'M30','M60','D','W','M','Y']}_${0..15}
          # actual-data-nodes: ds$->{0..1}
          database-strategy:
            standard:
              sharding-column: market_code
              precise-algorithm-class-name: com.zzc.sharding.DbShardingByMarketTypeAlgorithm
          table-strategy:
            complex:
              sharding-columns: kline_type,symbol_id
              algorithm-class-name: com.zzc.sharding.TableShardingByKlineTypeAndSymbolIdAlg
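The `actual-data-nodes` inline expressions above expand into the cross product of datasources, periods, and shard indexes. A hand-rolled expansion (plain Java, not ShardingSphere's own inline-expression parser) makes the resulting node list for the kline rule concrete:

```java
import java.util.ArrayList;
import java.util.List;

public class ActualDataNodes {
    // Expands ds$->{0..2}.kline_${['M5','M30','M60','D','W','M','Y']}_${0..15}
    public static List<String> expandKlineNodes() {
        List<String> nodes = new ArrayList<>();
        String[] periods = {"M5", "M30", "M60", "D", "W", "M", "Y"};
        for (int ds = 0; ds <= 2; ds++) {
            for (String p : periods) {
                for (int i = 0; i <= 15; i++) {
                    nodes.add("ds" + ds + ".kline_" + p + "_" + i);
                }
            }
        }
        return nodes;
    }

    public static void main(String[] args) {
        List<String> nodes = expandKlineNodes();
        System.out.println(nodes.size());   // 336 nodes = 3 datasources * 7 periods * 16 shards
        System.out.println(nodes.get(0));   // ds0.kline_M5_0
    }
}
```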

  • Database routing algorithm: DbShardingByMarketTypeAlgorithm
package com.zzc.sharding;

import java.util.Collection;

import lombok.extern.slf4j.Slf4j;
import org.apache.shardingsphere.api.sharding.standard.PreciseShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.standard.PreciseShardingValue;

// SpringContextUtil and DatabaseShardingConfig are project-internal classes; imports omitted.

@Slf4j
public class DbShardingByMarketTypeAlgorithm implements PreciseShardingAlgorithm<String> {
    private DatabaseShardingConfig config;

    @Override
    public String doSharding(Collection<String> collection, PreciseShardingValue<String> preciseShardingValue) {
        // Extract marketType (the sharding column value) from the SQL
        String marketType = preciseShardingValue.getValue();
        if (config == null) {
            config = SpringContextUtil.getBean(DatabaseShardingConfig.class);
        }
        // Look up the database configured for this marketType
        String dbName = config.getDbName(marketType);
        if (!collection.contains(dbName)) {
            log.error("Database sharding error. column-value : [{}], DatabaseShardingConfig dbName : [{}], shardingsphere configs : [{}]", marketType, dbName, collection);
            throw new IllegalArgumentException("Database sharding error.");
        }
        return dbName;
    }
}
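Stripped of the ShardingSphere types, the algorithm reduces to a map lookup plus a sanity check. A standalone sketch (the map is hard-coded here for illustration, mirroring part of the marketConfigs YAML at the end of this post):

```java
import java.util.HashMap;
import java.util.Map;

public class MarketRoutingSketch {
    private static final Map<String, String> DB_MAP = new HashMap<>();
    static {
        // market code -> datasource name
        for (String m : "HK,HK_WRNT,HK_BONDA,HK_TRUST".split(",")) DB_MAP.put(m, "ds1");
        for (String m : "US,US_PINK,US_OPTION,SH,SZ".split(","))   DB_MAP.put(m, "ds0");
    }

    public static String route(String marketCode) {
        String db = DB_MAP.get(marketCode);
        if (db == null) throw new IllegalArgumentException("Database sharding error.");
        return db;
    }

    public static void main(String[] args) {
        System.out.println(route("HK")); // ds1
        System.out.println(route("US")); // ds0
    }
}
```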

  • TableShardingByDateAndSymbolAlgorithm

package com.zzc.sharding;

import java.util.Collection;
import java.util.Collections;
import java.util.List;

import lombok.extern.slf4j.Slf4j;
import org.apache.shardingsphere.api.sharding.complex.ComplexKeysShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.complex.ComplexKeysShardingValue;

@Slf4j
public class TableShardingByDateAndSymbolAlgorithm implements ComplexKeysShardingAlgorithm {
    private static final String FIELD_NAME_DATE = "trade_date";
    private static final String FIELD_NAME_SYMBOL = "symbol_id";
    private DatabaseShardingConfig config;

    @Override
    public Collection<String> doSharding(Collection collection, ComplexKeysShardingValue complexKeysShardingValue) {
        if (config == null) {
            config = SpringContextUtil.getBean(DatabaseShardingConfig.class);
        }
        // Extract the trade_date sharding value from the SQL
        String date = ((List<String>) complexKeysShardingValue.getColumnNameAndShardingValuesMap().get(FIELD_NAME_DATE)).get(0);
        // Extract the symbol_id sharding value from the SQL
        Long symbolId = ((List<Long>) complexKeysShardingValue.getColumnNameAndShardingValuesMap().get(FIELD_NAME_SYMBOL)).get(0);
        // Build the physical table name <logicTable>_<yyMMdd>_<symbolId % shardNum>, e.g. trade_record_241118_1
        String logicTable = complexKeysShardingValue.getLogicTableName();
        DatabaseShardingConfig.TableShardingConfig shardingConfig = config.getTableShardingConfig(logicTable);
        return Collections.singletonList(logicTable + "_" + date.substring(2).replaceAll("-", "") + "_" + symbolId % shardingConfig.getTableShardingNum());
    }
}
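The suffix logic in `doSharding` above can be checked in isolation (a minimal sketch; the class name is made up, and the modulus 250 comes from the trade_record config later in this post):

```java
public class TradeRecordSuffix {
    // "2025-02-21" -> "250221", then append symbol_id % tableShardingNum
    public static String physicalTable(String logicTable, String tradeDate,
                                       long symbolId, int tableShardingNum) {
        String day = tradeDate.substring(2).replaceAll("-", "");
        return logicTable + "_" + day + "_" + (symbolId % tableShardingNum);
    }

    public static void main(String[] args) {
        // prints trade_record_250221_95
        System.out.println(physicalTable("trade_record", "2025-02-21", 12345L, 250));
    }
}
```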

  • TableShardingByDateAlg
package com.zzc.sharding;

import java.util.Collection;
import java.util.Collections;
import java.util.List;

import lombok.extern.slf4j.Slf4j;
import org.apache.shardingsphere.api.sharding.complex.ComplexKeysShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.complex.ComplexKeysShardingValue;

@Slf4j
public class TableShardingByDateAlg implements ComplexKeysShardingAlgorithm {
    @Override
    public Collection<String> doSharding(Collection collection, ComplexKeysShardingValue complexKeysShardingValue) {
        // Extract the trade_date sharding value from the SQL
        String date = ((List<String>) complexKeysShardingValue.getColumnNameAndShardingValuesMap().get("trade_date")).get(0);

        // Build the physical table name <logicTable>_<yyMMdd>, e.g. kline_m1_241118
        String logicTable = complexKeysShardingValue.getLogicTableName();
        return Collections.singletonList(logicTable
                + "_" + date.substring(2).replaceAll("-", ""));
    }
}

  • TableShardingByKlineTypeAndSymbolIdAlg
package com.zzc.sharding;

import java.util.Collection;
import java.util.Collections;
import java.util.List;

import lombok.extern.slf4j.Slf4j;
import org.apache.shardingsphere.api.sharding.complex.ComplexKeysShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.complex.ComplexKeysShardingValue;

@Slf4j
public class TableShardingByKlineTypeAndSymbolIdAlg implements ComplexKeysShardingAlgorithm {

    private DatabaseShardingConfig config;

    @Override
    public Collection<String> doSharding(Collection collection, ComplexKeysShardingValue complexKeysShardingValue) {
        if (config == null) {
            config = SpringContextUtil.getBean(DatabaseShardingConfig.class);
        }
        // Extract the kline_type and symbol_id sharding values from the SQL
        String klineType = ((List<String>) complexKeysShardingValue.getColumnNameAndShardingValuesMap().get("kline_type")).get(0);
        Long symbolId = ((List<Long>) complexKeysShardingValue.getColumnNameAndShardingValuesMap().get("symbol_id")).get(0);
        // Build the physical table name <logicTable>_<klineType>_<symbolId % shardNum>
        String logicTable = complexKeysShardingValue.getLogicTableName();
        DatabaseShardingConfig.TableShardingConfig shardingConfig = config.getTableShardingConfig(logicTable);
        log.debug("symbolId:{}", symbolId);
        log.debug("klineType:{}", klineType);
        log.debug("shardingConfig:{}", shardingConfig);
        return Collections.singletonList(logicTable
                + "_" + klineType + "_" + symbolId % shardingConfig.getTableShardingNum());
    }
}

  • Scheduled jobs that create and clear the shard tables
package com.zzc.service.schedule;

import java.sql.Connection;
import java.sql.Statement;
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.temporal.TemporalAdjusters;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

import javax.sql.DataSource;

import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.apache.shardingsphere.shardingjdbc.jdbc.core.datasource.ShardingDataSource;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// SpringContextUtil, RedisLockUtils, DateUtils and DatabaseShardingConfig are
// project-internal classes; their imports are omitted.

@Slf4j
@Component
@RequiredArgsConstructor
public class QuotationDataManagementJob {

    /** Redis lock keys (values assumed; not shown in the original snippet) */
    private final static String LOCK_CREATE_SHARDING_TABLE = "lock:create_sharding_table";
    private final static String LOCK_CLEAR_SHARDING_TABLE = "lock:clear_sharding_table";
    /** Seconds to wait when acquiring the lock */
    private final static int LOCK_WAIT_SECONDS = 10;
    /** Auto-release time (seconds) once the lock is held */
    private final static int LOCK_LEASE_SECONDS = 30 * 60;
    /** DDL for creating a shard table from its template table */
    private final static String SHARDING_TABLE_CREATE_SQL = "CREATE TABLE IF NOT EXISTS %s LIKE %s;";
    /** DDL for dropping a shard table (data cleanup, keeps MySQL disk usage bounded) */
    private final static String SHARDING_TABLE_CLEAR_SQL = "DROP TABLE IF EXISTS %s;";
    private final static String DS_SHARDING = "shardingDataSource";
    private final static String DS_OLAP = "olapDataSource";

    private final DatabaseShardingConfig dbShardingConfig;
    private final RedissonClient redissonClient;
    private final DataSource shardingDataSource;
    private final DataSource olapDataSource;

    /**
     * Create next week's quote tables every Friday at 12:30 PM
     */
    @Scheduled(cron = "0 30 12 ? * FRI")
    public void createShardingTableJob() {
        RLock lock = redissonClient.getLock(LOCK_CREATE_SHARDING_TABLE);
        RedisLockUtils.lockExecute(lock, LOCK_WAIT_SECONDS, LOCK_LEASE_SECONDS, TimeUnit.SECONDS, () -> {
            dbShardingConfig.getTables().forEach((tableName, config) -> {
                if (config.getRunCreateJob()) {
                    createShardingTable(tableName, config);
                }
            });
            return null;
        });
        log.info("createShardingTable job done");
    }

    /**
     * Clean up expired shard tables every day at 10:00
     */
    @Scheduled(cron = "0 0 10 * * ?")
    public void clearShardingTableJob() {
        RLock lock = redissonClient.getLock(LOCK_CLEAR_SHARDING_TABLE);
        RedisLockUtils.lockExecute(lock, LOCK_WAIT_SECONDS, LOCK_LEASE_SECONDS, TimeUnit.SECONDS, () -> {
            dbShardingConfig.getTables().forEach((tableName, config) -> {
                clearShardingTable(tableName, config);
            });
            return null;
        });
        log.info("clearShardingTable job done");
    }

    private void createShardingTable(String tableName, DatabaseShardingConfig.TableShardingConfig config) {
        if (DS_OLAP.equals(config.getDs())) {
            // try-with-resources so the connection is always returned to the pool
            try (Connection connection = olapDataSource.getConnection()) {
                List<String> nextWeekWorkDays = getNextWeekWorkDays();
                nextWeekWorkDays.forEach(day -> createShardingTable("olap", connection, tableName, day, config));
            } catch (Throwable t) {
                log.error("createShardingTable error. db : [olap] tableName : [{}]", tableName, t);
            }
        } else {
            ((ShardingDataSource) shardingDataSource).getDataSourceMap().forEach((dbName, myDataSource) -> {
                if (dbName.equals(dbShardingConfig.getCenterDs())) {
                    // The center database does not get these shard tables
                    return;
                }
                try (Connection connection = myDataSource.getConnection()) {
                    List<String> nextWeekWorkDays = getNextWeekWorkDays();
                    nextWeekWorkDays.forEach(day -> createShardingTable(dbName, connection, tableName, day, config));
                } catch (Throwable t) {
                    log.error("createShardingTable error. db : [{}] tableName : [{}]", dbName, tableName, t);
                }
            });
        }
    }

    /**
     * Create the shard tables for one trading day
     *
     * @param dbName     database name
     * @param connection database connection
     * @param tableName  logical table name
     * @param day        trading day (kept as a parameter so data can be backfilled manually)
     * @param config     sharding config for the table
     */
    private void createShardingTable(String dbName, Connection connection, String tableName, String day, DatabaseShardingConfig.TableShardingConfig config) {
        DatabaseShardingConfig.TableShardingConfig tableShardingConfig = dbShardingConfig.getTableShardingConfig(tableName);
        if (config.getTableShardingNum() > 1) {
            for (int i = 0; i < tableShardingConfig.getTableShardingNum(); i++) {
                String realTableName = tableName + "_" + day.substring(2) + "_" + i;
                try (Statement statement = connection.createStatement()) {
                    String sql = String.format(SHARDING_TABLE_CREATE_SQL, realTableName, tableShardingConfig.getTemplateTable());
                    statement.execute(sql);
                    log.info("createShardingTable success. db : [{}] tableName : [{}], realTableName : [{}], sql : [{}]", dbName, tableName, realTableName, sql);
                } catch (Throwable t) {
                    log.error("createShardingTable error. db : [{}] tableName : [{}], realTableName : [{}]", dbName, tableName, realTableName, t);
                }
            }
        } else {
            String realTableName = tableName + "_" + day.substring(2);
            try (Statement statement = connection.createStatement()) {
                String sql = String.format(SHARDING_TABLE_CREATE_SQL, realTableName, tableShardingConfig.getTemplateTable());
                statement.execute(sql);
                log.info("createShardingTable success. db : [{}] tableName : [{}], realTableName : [{}], sql : [{}]", dbName, tableName, realTableName, sql);
            } catch (Throwable t) {
                log.error("createShardingTable error. db : [{}] tableName : [{}], realTableName : [{}]", dbName, tableName, realTableName, t);
            }
        }
    }

    /**
     * Get all working days (Monday to Friday) of next week
     *
     * @return next week's working days
     */
    private List<String> getNextWeekWorkDays() {
        LocalDate today = LocalDate.now();
        // Next Monday
        LocalDate nextMonday = today.with(TemporalAdjusters.next(DayOfWeek.MONDAY));
        List<String> workDays = new ArrayList<>();
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern(DateUtils.YYYYMMDD);
        for (int i = 0; i < 5; i++) {
            // Monday through Friday of next week
            LocalDate date = nextMonday.plusDays(i);
            workDays.add(date.format(formatter));
        }
        return workDays;
    }

    private void clearShardingTable(String tableName, DatabaseShardingConfig.TableShardingConfig config) {
        if (DS_OLAP.equals(config.getDs())) {
            // try-with-resources so the connection is always returned to the pool
            try (Connection connection = olapDataSource.getConnection()) {
                List<String> toBeClearDays = getToBeClearDays(tableName);
                toBeClearDays.forEach(day -> clearShardingTable("olap", connection, tableName, day, config));
            } catch (Throwable t) {
                log.error("clearShardingTable error. db : [olap] tableName : [{}]", tableName, t);
            }
        } else {
            ((ShardingDataSource) shardingDataSource).getDataSourceMap().forEach((dbName, myDataSource) -> {
                if (dbName.equals(dbShardingConfig.getCenterDs())) {
                    // The center database keeps its tables
                    return;
                }
                try (Connection connection = myDataSource.getConnection()) {
                    List<String> toBeClearDays = getToBeClearDays(tableName);
                    toBeClearDays.forEach(day -> clearShardingTable(dbName, connection, tableName, day, config));
                } catch (Throwable t) {
                    log.error("clearShardingTable error. db : [{}] tableName : [{}]", dbName, tableName, t);
                }
            });
        }
    }

    /**
     * Drop the shard tables for one trading day
     *
     * @param dbName     database name
     * @param connection database connection
     * @param tableName  logical table name
     * @param day        trading day (kept as a parameter so data can be cleaned manually)
     * @param config     sharding config for the table
     */
    private void clearShardingTable(String dbName, Connection connection, String tableName, String day, DatabaseShardingConfig.TableShardingConfig config) {
        if (config.getTableShardingNum() > 1) {
            for (int i = 0; i < config.getTableShardingNum(); i++) {
                String realTableName = tableName + "_" + day.substring(2) + "_" + i;
                try (Statement statement = connection.createStatement()) {
                    String sql = String.format(SHARDING_TABLE_CLEAR_SQL, realTableName);
                    statement.execute(sql);
                    log.info("clearShardingTable success. db : [{}] tableName : [{}], realTableName : [{}], sql : [{}]", dbName, tableName, realTableName, sql);
                } catch (Throwable t) {
                    log.error("clearShardingTable error. db : [{}] tableName : [{}], realTableName : [{}]", dbName, tableName, realTableName, t);
                }
            }
        } else {
            String realTableName = tableName + "_" + day.substring(2);
            try (Statement statement = connection.createStatement()) {
                String sql = String.format(SHARDING_TABLE_CLEAR_SQL, realTableName);
                statement.execute(sql);
                log.info("clearShardingTable success. db : [{}] tableName : [{}], realTableName : [{}], sql : [{}]", dbName, tableName, realTableName, sql);
            } catch (Throwable t) {
                log.error("clearShardingTable error. db : [{}] tableName : [{}], realTableName : [{}]", dbName, tableName, realTableName, t);
            }
        }
    }

    /**
     * Get the dates whose shard tables are due for cleanup
     *
     * @param tableName logical table name
     * @return dates to clean up
     */
    private List<String> getToBeClearDays(String tableName) {
        List<String> days = new ArrayList<>();
        DatabaseShardingConfig.TableShardingConfig tableShardingConfig = dbShardingConfig.getTableShardingConfig(tableName);
        LocalDate today = LocalDate.now();
        LocalDate startDay = today.minusDays(tableShardingConfig.getClearOffset());
        LocalDate endDay = today.minusDays(tableShardingConfig.getKeepDays());
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern(DateUtils.YYYYMMDD);
        for (LocalDate date = startDay; date.isBefore(endDay); date = date.plusDays(1)) {
            days.add(date.format(formatter));
        }
        return days;
    }

}
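The cleanup window computed by `getToBeClearDays` can be sketched standalone. Assuming `DateUtils.YYYYMMDD` is the pattern `yyyyMMdd`, with trade_record's `clearOffset: 15` and `keepDays: 7` the job sweeps the dates in `[today-15, today-7)`:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.List;

public class ClearWindowSketch {
    // Dates from today-clearOffset (inclusive) up to today-keepDays (exclusive)
    public static List<String> toBeClearDays(LocalDate today, int clearOffset, int keepDays) {
        List<String> days = new ArrayList<>();
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyyMMdd");
        for (LocalDate d = today.minusDays(clearOffset);
             d.isBefore(today.minusDays(keepDays));
             d = d.plusDays(1)) {
            days.add(d.format(fmt));
        }
        return days;
    }

    public static void main(String[] args) {
        List<String> days = toBeClearDays(LocalDate.of(2025, 2, 22), 15, 7);
        System.out.println(days.size());  // 8 days: 20250207 .. 20250214
        System.out.println(days.get(0));  // 20250207
    }
}
```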

  • Sharding configuration class DatabaseShardingConfig
package com.zzc.service.config;

import java.util.HashMap;
import java.util.Map;

import javax.annotation.PostConstruct;

import lombok.AccessLevel;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.Setter;
import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;

// YamlPropertySourceFactory is a project-internal helper; import omitted.

@Data
@Slf4j
@RefreshScope
@Configuration
@ConfigurationProperties(prefix = "refinitiv.api-service.db-sharding")
@PropertySource(value = "classpath:guda-refinitiv-api-db-sharding.yaml", factory = YamlPropertySourceFactory.class)
public class DatabaseShardingConfig {
    private String centerDs;
    private Map<String, TableShardingConfig> tables;
    private Map<String, String> marketConfigs;

    @Setter(AccessLevel.PRIVATE)
    private Map<String, String> dbMap;

    @PostConstruct
    public void init() {
        if (marketConfigs == null || marketConfigs.isEmpty()) {
            throw new RuntimeException("DatabaseShardingConfig error. configs is empty");
        }
        // Invert "datasource -> market list" into "market -> datasource"
        Map<String, String> tmp = new HashMap<>();
        marketConfigs.forEach((dbName, markets) -> {
            for (String market : markets.split(",")) {
                tmp.put(market.trim(), dbName);
            }
        });
        dbMap = tmp;
        log.info("DatabaseShardingConfig init success. config: [{}]", this);
    }

    /**
     * Resolve the database name for a market type
     *
     * @param market market type (name of the MarketCodeType enum)
     * @return database name
     */
    public String getDbName(String market) {
        return dbMap.get(market);
    }

    /**
     * Resolve the table sharding config for a logical table
     *
     * @param tableName logical table name
     * @return table sharding config
     */
    public TableShardingConfig getTableShardingConfig(String tableName) {
        return tables.get(tableName);
    }

    @Data
    @NoArgsConstructor
    @AllArgsConstructor
    public static class TableShardingConfig {
        /** Template table name */
        private String templateTable;
        /** Number of shard tables */
        private int tableShardingNum;
        /** Days of data to keep */
        private int keepDays;
        /** How many days back the cleanup window starts */
        private int clearOffset;
        /** Datasource name */
        private String ds;
        /** Whether the scheduled create job should generate tables for this logical table */
        private Boolean runCreateJob = true;
    }
}
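The `init()` inversion in the config class above works like this standalone sketch (market lists abbreviated for the example):

```java
import java.util.HashMap;
import java.util.Map;

public class MarketConfigInversion {
    // The YAML maps a datasource to a comma-separated market list;
    // init() inverts it into market -> datasource for O(1) routing lookups.
    public static Map<String, String> invert(Map<String, String> marketConfigs) {
        Map<String, String> dbMap = new HashMap<>();
        marketConfigs.forEach((dbName, markets) -> {
            for (String market : markets.split(",")) {
                dbMap.put(market.trim(), dbName);
            }
        });
        return dbMap;
    }

    public static void main(String[] args) {
        Map<String, String> cfg = new HashMap<>();
        cfg.put("ds1", "HK, HK_WRNT");
        cfg.put("ds0", "US, US_PINK");
        Map<String, String> dbMap = invert(cfg);
        System.out.println(dbMap.get("HK_WRNT")); // ds1
        System.out.println(dbMap.get("US"));      // ds0
    }
}
```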

  • guda-refinitiv-api-db-sharding.yaml
refinitiv.api-service:
  db-sharding:
    centerDs: 'center'
    tables:
      trade_record:
        # Template table
        templateTable: 'trade_record_240101'
        # Number of shard tables
        tableShardingNum: 250
        # Days of data to keep
        keepDays: 7
        # Cleanup starts this many days back (and sweeps forward until keepDays)
        clearOffset: 15
        ds: 'shardingDataSource'
      olap_quotation_snapshot:
        # Template table
        templateTable: 'olap_quotation_snapshot_240101'
        # Number of shard tables
        tableShardingNum: 1
        # Days of data to keep
        keepDays: 30
        # Cleanup starts this many days back (and sweeps forward until keepDays)
        clearOffset: 40
        ds: 'olapDataSource'
      kline_m1:
        # Template table
        templateTable: 'kline_m1'
        # Number of shard tables
        tableShardingNum: 1
        # Days of data to keep
        keepDays: 30
        # Cleanup starts this many days back (and sweeps forward until keepDays)
        clearOffset: 40
        ds: 'shardingDataSource'
      kline:
        # Template table
        templateTable: 'kline'
        # Number of shard tables
        tableShardingNum: 16
        # Days of data to keep
        keepDays: 30
        # Cleanup starts this many days back (and sweeps forward until keepDays)
        clearOffset: 40
        ds: 'shardingDataSource'
        runCreateJob: false

    marketConfigs:
      # ds0 stores the US, US_PINK, US_OPTION (and related) data
      # ds2: 'US, US_PINK, US_OPTION'
      ds1: 'HK, HK_WRNT, HK_BONDA, HK_TRUST'
      ds0: 'US, US_PINK, US_OPTION, SH, SZ, SZ_INDEX, SZ_FUND, SZ_GEM, US_ETF'
