1. Sharding-JDBC Introduction
Official documentation: https://shardingsphere.apache.org/index_zh.html
ShardingSphere includes several products; be sure to consult the user manual when working with it.
Sharding-JDBC was ShardingSphere's first product and is the predecessor of the ShardingSphere project.
- Sharding-JDBC is a distributed relational database middleware.
- It works in client-side proxy mode: there is no extra server to deploy, only the backend databases and an IDE are needed.
- It is positioned as a lightweight Java framework and is delivered as a jar.
- It can be thought of as an enhanced JDBC driver.
- It is fully compatible with mainstream ORM frameworks such as MyBatis-Plus.
- Architecture:
- Sharding-JDBC offers four configuration styles:
  - Java API
  - YAML (deeply nested)
  - properties
  - Spring namespace
- Differences from MyCat:
  - MyCat is a server-side proxy, while Sharding-JDBC is a client-side proxy.
  - In practice, if the company has DBAs, MyCat is usually recommended; if the team is all developers, Sharding-JDBC is recommended.
  - MyCat does not support horizontal table splitting inside a single database, whereas Sharding-JDBC does.
- Terminology:
  - Logical table: the merged view over the physical tables.
  - Actual table: the physical table where the data is stored.
  - Data node: the MySQL node that stores the data.
  - Binding table: equivalent to a child table in MyCat.
  - Broadcast table: equivalent to a global table in MyCat.
2. Getting Started with Sharding-JDBC
# 0. First create the database shard_order on each of the two MySQL instances
# 1. In both databases create two tables, order_info_1 and order_info_2
CREATE TABLE `order_info_1` (
`id` int(11) NOT NULL,
`order_amount` decimal(10,2) DEFAULT NULL,
`order_status` int(255) DEFAULT NULL,
`user_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `order_info_2` (
`id` int(11) NOT NULL,
`order_amount` decimal(10,2) DEFAULT NULL,
`order_status` int(255) DEFAULT NULL,
`user_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
# 2. Sharding rule: route rows to the two databases by whether id is odd or even, then split tables within each database by user_id
Create a new Spring Boot project.
Add the POM dependency:
<dependency>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>sharding-jdbc-spring-boot-starter</artifactId>
<version>4.0.0-RC2</version>
</dependency>
application.properties configuration
# name the two data sources
spring.shardingsphere.datasource.names=ds0,ds1
# data source connection for ds0; the key must match the name above
spring.shardingsphere.datasource.ds0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds0.jdbcUrl=jdbc:mysql://39.103.163.215:3306/shard_order
spring.shardingsphere.datasource.ds0.username=gavin
spring.shardingsphere.datasource.ds0.password=123456
# data source connection for ds1; the key must match the name above
spring.shardingsphere.datasource.ds1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds1.jdbcUrl=jdbc:mysql://39.101.221.95:3306/shard_order
spring.shardingsphere.datasource.ds1.username=gavin
spring.shardingsphere.datasource.ds1.password=123456
# concrete sharding rule, expressed on data nodes
spring.shardingsphere.sharding.tables.order_info.actual-data-nodes=ds$->{0..1}.order_info_$->{1..2}
# database sharding rule
spring.shardingsphere.sharding.tables.order_info.database-strategy.inline.sharding-column=id
spring.shardingsphere.sharding.tables.order_info.database-strategy.inline.algorithm-expression=ds$->{id % 2}
# table sharding rule
spring.shardingsphere.sharding.tables.order_info.table-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.tables.order_info.table-strategy.inline.algorithm-expression=order_info_$->{user_id % 2 + 1}
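To double check where a row lands under the two inline expressions above, here is a minimal sketch (plain Java, not ShardingSphere code) that simply mirrors the expressions; the sample values are the ones used in the JdbcTemplate test below.
public class RouteSketch {
    public static void main(String[] args) {
        int id = 3, userId = 2;                          // sample row from the insert test below
        String db = "ds" + (id % 2);                     // ds$->{id % 2}                    -> ds1
        String table = "order_info_" + (userId % 2 + 1); // order_info_$->{user_id % 2 + 1}  -> order_info_1
        System.out.println(db + "." + table);            // prints ds1.order_info_1
    }
}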
// Test code using JdbcTemplate
@SpringBootTest
class ShardingjdbcProjectApplicationTests {

    @Autowired
    JdbcTemplate jdbcTemplate;

    @Test
    void insertTest() {
        String sql = "insert into order_info(id,order_amount,order_status,user_id) values(3,213.88,1,2)";
        int i = jdbcTemplate.update(sql);
        System.out.println("rows affected: " + i);
    }
}
Test code combined with the MyBatis-Plus ORM framework
Insert data:
@Autowired
OrderInfoService orderInfoService;

@Test
void contextLoads() {
    OrderInfo orderInfo = new OrderInfo();
    orderInfo.setId(2).setOrderAmount(BigDecimal.valueOf(300)).setOrderStatus(1).setUserId(1);
    orderInfoService.save(orderInfo);

    OrderInfo orderInfo2 = new OrderInfo();
    orderInfo2.setId(3).setOrderAmount(BigDecimal.valueOf(213)).setOrderStatus(1).setUserId(2);
    orderInfoService.save(orderInfo2);
}
An error occurs here, a common pitfall: with HikariDataSource the property key must be jdbcUrl, not url
spring.shardingsphere.datasource.ds0.url=jdbc:mysql://39.103.163.215:3306/shard_order
# change it to:
spring.shardingsphere.datasource.ds0.jdbcUrl=jdbc:mysql://39.103.163.215:3306/shard_order
After it runs successfully, you can see the order data has been split across the two databases and the two order tables.
Query all data
// usage is exactly the same as before
@Test
void listAll() {
    List<OrderInfo> list = orderInfoService.list();
    list.forEach(System.out::println);
}
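A hedged sketch of a query that carries both sharding keys; assuming the entity exposes the usual MyBatis-Plus getters, Sharding-JDBC should be able to route it to a single physical table (ds1.order_info_1 for id=3, user_id=2) instead of querying every data node.
@Test
void listByShardingKeys() {
    List<OrderInfo> list = orderInfoService.lambdaQuery()
            .eq(OrderInfo::getId, 3)      // database key: 3 % 2 = 1      -> ds1
            .eq(OrderInfo::getUserId, 2)  // table key: 2 % 2 + 1 = 1     -> order_info_1
            .list();
    list.forEach(System.out::println);
}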
3. Configuring Broadcast Tables
Equivalent to MyCat's global table (every database holds the full, identical data set). First create the broadcast table province_info in both databases.
Use case: tables that are small and should not be sharded, such as configuration tables or province/city/district tables.
CREATE TABLE `province_info` (
`id` int(11) NOT NULL,
`name` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Add the following to application.properties:
spring.shardingsphere.sharding.broadcast-tables=province_info
Test code for insert and query:
@Test
void insertBroadcast() {
    String sql = "insert into province_info(id,name) values(1,'beijing')";
    int i = jdbcTemplate.update(sql);
    System.out.println("******* rows affected: " + i);
}

@Test
void selectBroadcast() {
    String sql = "select * from province_info";
    List<Map<String, Object>> result = jdbcTemplate.queryForList(sql);
    for (Map<String, Object> val : result) {
        System.out.println("=========== " + val.get("id") + " ----- " + val.get("name"));
    }
}
4. Configuring Binding Tables
Equivalent to MyCat's parent/child table relationship. Following the same pattern as order_info, create order_item: build order_item_1 and order_item_2 in both databases.
CREATE TABLE `order_item_1` (
`id` int(11) DEFAULT NULL,
`product_name` varchar(255) DEFAULT NULL,
`order_id` int(11) DEFAULT NULL,
`user_id` int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `order_item_2` (
`id` int(11) DEFAULT NULL,
`product_name` varchar(255) DEFAULT NULL,
`order_id` int(11) DEFAULT NULL,
`user_id` int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Configure the binding tables so that the sharding logic of the two order_item tables stays consistent with order_info:
# name the two data sources
spring.shardingsphere.datasource.names=ds0,ds1
# data source connection for ds0; the key must match the name above
spring.shardingsphere.datasource.ds0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds0.jdbcUrl=jdbc:mysql://39.103.163.215:3306/shard_order
spring.shardingsphere.datasource.ds0.username=gavin
spring.shardingsphere.datasource.ds0.password=123456
# data source connection for ds1; the key must match the name above
spring.shardingsphere.datasource.ds1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds1.jdbcUrl=jdbc:mysql://39.101.221.95:3306/shard_order
spring.shardingsphere.datasource.ds1.username=gavin
spring.shardingsphere.datasource.ds1.password=123456
# concrete sharding rule for order_info, expressed on data nodes
spring.shardingsphere.sharding.tables.order_info.actual-data-nodes=ds$->{0..1}.order_info_$->{1..2}
# database sharding rule
spring.shardingsphere.sharding.tables.order_info.database-strategy.inline.sharding-column=id
spring.shardingsphere.sharding.tables.order_info.database-strategy.inline.algorithm-expression=ds$->{id % 2}
# table sharding rule
spring.shardingsphere.sharding.tables.order_info.table-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.tables.order_info.table-strategy.inline.algorithm-expression=order_info_$->{user_id % 2 + 1}
# concrete sharding rule for order_item, expressed on data nodes
spring.shardingsphere.sharding.tables.order_item.actual-data-nodes=ds$->{0..1}.order_item_$->{1..2}
# database sharding rule
spring.shardingsphere.sharding.tables.order_item.database-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.order_item.database-strategy.inline.algorithm-expression=ds$->{order_id % 2}
# table sharding rule
spring.shardingsphere.sharding.tables.order_item.table-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.tables.order_item.table-strategy.inline.algorithm-expression=order_item_$->{user_id % 2 + 1}
# binding table relationship
spring.shardingsphere.sharding.binding-tables=order_info,order_item
# broadcast table
spring.shardingsphere.sharding.broadcast-tables=province_info
Exercise: configure a binding table yourself and use it to insert and query data.
// insert into the binding tables
@Test
void insertOrder() {
    OrderInfo orderInfo = new OrderInfo();
    orderInfo.setId(4).setUserId(10).setOrderStatus(1).setOrderAmount(BigDecimal.valueOf(2499));
    orderInfoService.save(orderInfo);

    OrderItem orderItem = new OrderItem();
    orderItem.setId(2).setOrderId(4).setProductName("易跑跑步机GTS6").setUserId(10);
    orderItemService.save(orderItem);
}
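A hedged sketch of the query side: because order_info and order_item are declared as binding tables, a join on the two logical tables should be routed to the matching physical pair (for example ds0.order_info_1 with ds0.order_item_1 for the row inserted above) rather than the Cartesian product of all combinations.
@Test
void selectBinding() {
    String sql = "select o.id, o.order_amount, i.product_name " +
            "from order_info o join order_item i on o.id = i.order_id " +
            "where o.user_id = 10";
    List<Map<String, Object>> rows = jdbcTemplate.queryForList(sql);
    rows.forEach(System.out::println);
}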
5. Read-Write Splitting Configuration
First configure the data sources in the properties file; every master that is configured must have a slave configured with it.
# declare the master and slave data source nodes
spring.shardingsphere.datasource.names=master0,master0slave0,master1,master1slave0
# master0 data source connection
spring.shardingsphere.datasource.master0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.master0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.master0.jdbcUrl=jdbc:mysql://39.103.163.215:3306/shard_order
spring.shardingsphere.datasource.master0.username=gavin
spring.shardingsphere.datasource.master0.password=123456
# master0slave0 data source connection
spring.shardingsphere.datasource.master0slave0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.master0slave0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.master0slave0.jdbcUrl=jdbc:mysql://39.99.212.46:3306/shard_order
spring.shardingsphere.datasource.master0slave0.username=gavin
spring.shardingsphere.datasource.master0slave0.password=123456
# master1 data source connection
spring.shardingsphere.datasource.master1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.master1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.master1.jdbcUrl=jdbc:mysql://39.101.221.95:3306/shard_order
spring.shardingsphere.datasource.master1.username=gavin
spring.shardingsphere.datasource.master1.password=123456
# master1slave0 data source connection
spring.shardingsphere.datasource.master1slave0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.master1slave0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.master1slave0.jdbcUrl=jdbc:mysql://localhost:3306/shard_order
spring.shardingsphere.datasource.master1slave0.username=root
spring.shardingsphere.datasource.master1slave0.password=gavin
# concrete sharding rule for order_info, expressed on data nodes
spring.shardingsphere.sharding.tables.order_info.actual-data-nodes=ds$->{0..1}.order_info_$->{1..2}
# database sharding rule
spring.shardingsphere.sharding.tables.order_info.database-strategy.inline.sharding-column=id
spring.shardingsphere.sharding.tables.order_info.database-strategy.inline.algorithm-expression=ds$->{id % 2}
# table sharding rule
spring.shardingsphere.sharding.tables.order_info.table-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.tables.order_info.table-strategy.inline.algorithm-expression=order_info_$->{user_id % 2 + 1}
# concrete sharding rule for order_item, expressed on data nodes
spring.shardingsphere.sharding.tables.order_item.actual-data-nodes=ds$->{0..1}.order_item_$->{1..2}
# database sharding rule
spring.shardingsphere.sharding.tables.order_item.database-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.order_item.database-strategy.inline.algorithm-expression=ds$->{order_id % 2}
# table sharding rule
spring.shardingsphere.sharding.tables.order_item.table-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.tables.order_item.table-strategy.inline.algorithm-expression=order_item_$->{user_id % 2 + 1}
# binding table relationship
spring.shardingsphere.sharding.binding-tables=order_info,order_item
# broadcast table
spring.shardingsphere.sharding.broadcast-tables=province_info
# bind the master and slaves for read-write splitting
spring.shardingsphere.sharding.master-slave-rules.ds0.master-data-source-name=master0
spring.shardingsphere.sharding.master-slave-rules.ds0.slave-data-source-names=master0slave0
# equivalent to the balance attribute on dataHost in MyCat's schema.xml: the load-balancing rule for the slaves, options ROUND_ROBIN or RANDOM. Here reads are spread over the read nodes in round-robin order; according to the official docs a custom load-balancing algorithm can also be plugged in (much like Spring Cloud Ribbon supports custom rules)
spring.shardingsphere.sharding.master-slave-rules.ds0.load-balance-algorithm-type=round_robin
spring.shardingsphere.sharding.master-slave-rules.ds1.master-data-source-name=master1
spring.shardingsphere.sharding.master-slave-rules.ds1.slave-data-source-names=master1slave0
# same as above; here RANDOM is used, so reads are spread randomly over the read nodes
spring.shardingsphere.sharding.master-slave-rules.ds1.load-balance-algorithm-type=random
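A hedged test sketch: with the rules above, plain SELECTs should be routed to the slave data sources, while inserts and updates go to the masters. If the 4.x hint API is available (org.apache.shardingsphere.api.hint.HintManager, an assumption worth verifying against your exact version), a read can also be forced onto the master for read-your-own-write cases.
@Test
void readWriteSplitTest() {
    // this SELECT should be served by master0slave0 / master1slave0
    jdbcTemplate.queryForList("select * from order_info").forEach(System.out::println);
    // writes always go to the master data sources
    jdbcTemplate.update("insert into order_info(id,order_amount,order_status,user_id) values(5,99.99,1,3)");
    // assumption: 4.x hint API; forces the statements in this thread onto the master
    try (HintManager hint = HintManager.getInstance()) {
        hint.setMasterRouteOnly();
        jdbcTemplate.queryForList("select * from order_info").forEach(System.out::println);
    }
}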
After read-write splitting is configured, you must set up master-slave replication between the databases yourself; Sharding-JDBC will not copy data to the slaves for you, and neither does MyCat.
Source code:
Question to think about:
Does Sharding-JDBC have a dual-active concept? Does it support hot switching like MyCat, and if not, how do we solve that?
Answer: Sharding-JDBC has no dual-active concept and does not support hot switching; it has no equivalent of MyCat's multiple writeHost entries under a dataHost. Sharding-JDBC is only an advanced client-side manager of data source connections. A common workaround is to run MySQL in a dual-master (master-master) replication topology.
6. Notes on Integrating the Druid Connection Pool
Sharding-JDBC creates its own data source connection pools, so the Druid pool is not really necessary. If you do import this dependency:
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid-spring-boot-starter</artifactId>
<version>1.1.23</version>
</dependency>
then Druid's auto-configuration conflicts with the data sources Sharding-JDBC creates and the application fails to start. If you still want Druid to monitor the databases, two steps are needed.
Step 1: get rid of the DruidDataSourceAutoConfigure auto-configuration. Either:
- Do not import the druid-spring-boot-starter package; use the plain Druid artifact instead:
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid</artifactId>
<version>1.1.20</version>
</dependency>
- Or exclude the DruidDataSourceAutoConfigure auto-configuration on the startup class:
@SpringBootApplication(exclude = {DruidDataSourceAutoConfigure.class})
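For completeness, a minimal sketch of the startup class with the exclusion in place (the class name follows the demo project used earlier):
@SpringBootApplication(exclude = {DruidDataSourceAutoConfigure.class})
public class ShardingjdbcProjectApplication {
    public static void main(String[] args) {
        SpringApplication.run(ShardingjdbcProjectApplication.class, args);
    }
}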
Step 2: when using JPA as the ORM framework, add the following configuration class:
@Configuration
@EnableConfigurationProperties(JpaProperties.class)
public class DataSourceConfiguration {
private final JpaProperties jpaProperties;
private final Environment environment;
public DataSourceConfiguration(JpaProperties jpaProperties, Environment environment) {
this.jpaProperties = jpaProperties;
this.environment = environment;
}
@Primary
@Bean
public DataSource dataSource() {
String prefix = "spring.shardingsphere.datasource.";
String each = this.getDataSourceNames(prefix).get(0);
try {
return this.getDataSource(prefix, each);
} catch (final ReflectiveOperationException ex) {
throw new ShardingSphereException("Can't find datasource type!", ex);
}
}
@Primary
@Bean
public EntityManagerFactory entityManagerFactory() {
HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
vendorAdapter.setDatabase(Database.MYSQL);
vendorAdapter.setGenerateDdl(true);
vendorAdapter.setShowSql(true);
LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
factory.setJpaVendorAdapter(vendorAdapter);
factory.setPersistenceUnitName("default");
factory.setPackagesToScan("com.lzm.*");
factory.setDataSource(this.dataSource());
factory.setJpaPropertyMap(this.jpaProperties.getProperties());
factory.afterPropertiesSet();
return factory.getObject();
}
@Bean
@Primary
public EntityManager entityManager(EntityManagerFactory entityManagerFactory) {
return SharedEntityManagerCreator.createSharedEntityManager(entityManagerFactory);
}
@Primary
@Bean
public PlatformTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
JpaTransactionManager txManager = new JpaTransactionManager();
txManager.setEntityManagerFactory(entityManagerFactory);
return txManager;
}
private List<String> getDataSourceNames(final String prefix) {
StandardEnvironment standardEnv = (StandardEnvironment) this.environment;
standardEnv.setIgnoreUnresolvableNestedPlaceholders(true);
return null == standardEnv.getProperty(prefix + "name")
? new InlineExpressionParser(standardEnv.getProperty(prefix + "names")).splitAndEvaluate()
: Collections.singletonList(standardEnv.getProperty(prefix + "name"));
}
@SuppressWarnings("unchecked")
private DataSource getDataSource(final String prefix, final String dataSourceName) throws ReflectiveOperationException {
Map dataSourceProps = PropertyUtil.handle(this.environment, prefix + dataSourceName.trim(), Map.class);
Preconditions.checkState(!dataSourceProps.isEmpty(), "Wrong datasource properties!");
DataSource result = DataSourceUtil.getDataSource(dataSourceProps.get("type").toString(), dataSourceProps);
DataSourcePropertiesSetterHolder.getDataSourcePropertiesSetterByType(dataSourceProps.get("type").toString())
.ifPresent(dataSourcePropertiesSetter -> dataSourcePropertiesSetter.propertiesSet(this.environment, prefix, dataSourceName, result));
return result;
}
}
But I don't like JPA as an ORM framework, so I generally don't use it.
7. Sharding Configuration in Practice
Java configuration style
The sections above configured sharding in application.properties; it can also be configured with custom configuration classes. See the Java configuration user manual on the official site: https://shardingsphere.apache.org/document/legacy/4.x/document/cn/manual/sharding-jdbc/configuration/config-java/
Taking the rcc project as an example, the dependency imported in pom.xml is:
<dependency>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>sharding-jdbc-orchestration</artifactId>
<version>4.1.1</version>
</dependency>
Since this is not an xxx-starter, there is no xxxAutoConfiguration auto-configuration class and no xxxProperties class, so the sharding setup has to be done through Java configuration classes, as follows:
1. Configure the physical data sources in application.yaml
spring:
profiles:
active: sit
application:
name: mcsp-rcc-service-xiejw17
main:
allow-bean-definition-overriding: true
lifecycle:
timeout-per-shutdown-phase: 40s
datasource:
rcc:
default-ds:
url: jdbc:mysql://10.16.156.63:3306/mcsp_rcc?Unicode=true&characterEncoding=UTF-8&serverTimezone=UTC&zeroDateTimeBehavior=convertToNull&rewriteBatchedStatements=true&autoReconnect=true
username: mcsp_rcc_user
password: tZe16dyfF
read0:
url: jdbc:mysql://10.16.156.63:3306/mcsp_rcc_00?Unicode=true&characterEncoding=UTF-8&serverTimezone=UTC&zeroDateTimeBehavior=convertToNull&rewriteBatchedStatements=true&autoReconnect=true
username: mcsp_rcc_user
password: tZe16dyfF
write0:
url: jdbc:mysql://10.16.156.63:3306/mcsp_rcc_00?Unicode=true&characterEncoding=UTF-8&serverTimezone=UTC&zeroDateTimeBehavior=convertToNull&rewriteBatchedStatements=true&autoReconnect=true
username: mcsp_rcc_user
password: tZe16dyfF
....
shard-count: 8
driver-class-name: com.mysql.jdbc.Driver
ivc:
default-ds:
url: jdbc:mysql://10.16.92.106:3306/mcsp_ivc?Unicode=true&characterEncoding=UTF-8&serverTimezone=UTC&zeroDateTimeBehavior=convertToNull&rewriteBatchedStatements=true&autoReconnect=true
username: mcsp_ivc_user
password: di2yh2wPA
read0:
url: jdbc:mysql://10.16.92.106:3306/mcsp_ivc_00?Unicode=true&characterEncoding=UTF-8&serverTimezone=UTC&zeroDateTimeBehavior=convertToNull&rewriteBatchedStatements=true&autoReconnect=true
username: mcsp_ivc_search
password: Nnlsa83Aj
write0:
url: jdbc:mysql://10.16.92.106:3306/mcsp_ivc_00?Unicode=true&characterEncoding=UTF-8&serverTimezone=UTC&zeroDateTimeBehavior=convertToNull&rewriteBatchedStatements=true&autoReconnect=true
username: mcsp_ivc_user
password: di2yh2wPA
....
shard-count: 2
driver-class-name: com.mysql.jdbc.Driver
.....
shard-count: 8 means the data is split across 8 databases.
2. EnvironmentConfig.java registers the physical data sources into the Spring container
@Configuration
public class EnvironmentConfig implements BeanDefinitionRegistryPostProcessor, EnvironmentAware {
public static Environment environment;
private static final List<String> modules = new ArrayList<>();
static {
modules.add("ivc");
modules.add("rcc");
modules.add("mcc");
modules.add("smc");
}
@Override // capture the environment
public void setEnvironment(Environment env) {
environment = env;
}
@Override
public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry beanDefinitionRegistry) throws BeansException {
registerBeanDefinition(beanDefinitionRegistry);
}
@Override
public void postProcessBeanFactory(ConfigurableListableBeanFactory configurableListableBeanFactory) throws BeansException {
}
public void registerBeanDefinition( BeanDefinitionRegistry beanDefinitionRegistry){
for(String module : modules){
String shardParam = environment.getProperty(CommonConstants.PREFIX + module + ".shard-count");
int shardCount = Objects.nonNull(shardParam) ? Integer.parseInt(shardParam) : 2;
String driverClass = environment.getProperty(CommonConstants.PREFIX + module + ".driver-class-name");
for(int i=0;i<shardCount;i++){
String read = ".read" + i + ".";
String write = ".write" + i + ".";
String readUrl = environment.getProperty(CommonConstants.PREFIX + module + read + "url");
String readUser = environment.getProperty(CommonConstants.PREFIX + module + read + "username");
String readPassWord = environment.getProperty(CommonConstants.PREFIX + module + read + "password");
String writeUrl = environment.getProperty(CommonConstants.PREFIX + module + write + "url");
String writeUser = environment.getProperty(CommonConstants.PREFIX + module + write + "username");
String writePassWord = environment.getProperty(CommonConstants.PREFIX + module + write + "password");
RootBeanDefinition readBeanDefinition = new RootBeanDefinition(DataSource.class);
configBeanDefinition(readBeanDefinition,module + "-readDataSource" + i, readUser, readPassWord, readUrl,driverClass);
RootBeanDefinition writeBeanDefinition = new RootBeanDefinition(DataSource.class);
configBeanDefinition(writeBeanDefinition,module + "-writeDataSource" + i, writeUser, writePassWord, writeUrl,driverClass);
beanDefinitionRegistry.registerBeanDefinition(module + "-readDataSource" + i, readBeanDefinition);
beanDefinitionRegistry.registerBeanDefinition(module + "-writeDataSource" + i, writeBeanDefinition);
}
// register the bean definition for the module's default database
String defaultDsUrl = environment.getProperty(CommonConstants.PREFIX + module + ".default-ds.url");
String defaultDsUser = environment.getProperty(CommonConstants.PREFIX + module + ".default-ds.username");
String defaultDsPassWord = environment.getProperty(CommonConstants.PREFIX + module + ".default-ds.password");
RootBeanDefinition defaultDsBeanDefinition = new RootBeanDefinition(DataSource.class);
configBeanDefinition(defaultDsBeanDefinition,module + "-defaultDataSource", defaultDsUser, defaultDsPassWord, defaultDsUrl,driverClass);
beanDefinitionRegistry.registerBeanDefinition(module + "-defaultDataSource", defaultDsBeanDefinition);
}
}
private void configBeanDefinition(RootBeanDefinition beanDefinition, String name, String user, String passWord, String url, String driverClass){
/*beanDefinition.setSource("EnvironmentConfig");
Properties prop = build(url, user, passWord, driverClass);
beanDefinition.getPropertyValues().add("xaProperties", prop);
beanDefinition.getPropertyValues().add("minPoolSize",5);
beanDefinition.getPropertyValues().add("maxPoolSize",50);
beanDefinition.getPropertyValues().add("maxIdleTime",0);
beanDefinition.getPropertyValues().add("testQuery","select 1");
beanDefinition.getPropertyValues().add("xaDataSourceClassName", "com.alibaba.druid.pool.xa.DruidXADataSource");
beanDefinition.setRole(2);
beanDefinition.setBeanClass(AtomikosDataSourceBean.class);*/
// The commented-out block above used the XA two-phase commit protocol for distributed transactions,
// with Atomikos as the TM and the databases as RMs (MySQL supports the XA interface since 5.0).
// XA performs poorly: it is synchronous and holds database locks for a long time, so it does not suit
// high-concurrency or long-lived transactions; requests easily block waiting for the limited number of
// database connections. It only fits low-concurrency monolithic applications.
MutablePropertyValues propertyValues = new MutablePropertyValues();
propertyValues.add("url", url).add("username", user).add("password", passWord)
.add("name",name)
.add("driverClassName", driverClass)
.add("initialSize",3)
.add("minIdle",3)
.add("maxActive",20)
.add("minEvictableIdleTimeMillis",1000*60*5) // 空闲连接最小5分钟后才关闭,默认30分钟
.add("maxEvictableIdleTimeMillis",1000L*60L*60L) // 空闲连接最大60分钟后才关闭,默认7小时
.add("validationQuery","select 1")
//.add("removeAbandoned",true) // 删除泄漏连接,为避免误删正在使用的连接,官网建议生产环境上关闭
.add("removeAbandonedTimeout",3600) // 60*60秒,从连接池获取的连接如果超过这个时间不归还到连接池,则被认为是泄漏连接,可以被删除回收释放资源
// 避免连接泄漏,如果业务超过这个时间还没有处理完,可以适当延长这个时间
.add("logAbandoned",true)
;
beanDefinition.setPropertyValues(propertyValues);
beanDefinition.setBeanClass(DruidDataSource.class);
}
private Properties build(String url,String user,String passWord, String driverClass) {
Properties prop = new Properties();
prop.put("url", url);
prop.put("username", user);
prop.put("password", passWord);
prop.put("driverClassName", driverClass);
return prop;
}
}
EnvironmentConfig implements the interfaces BeanDefinitionRegistryPostProcessor and EnvironmentAware:
- BeanDefinitionRegistryPostProcessor is implemented so that postProcessBeanDefinitionRegistry can be overridden and the data sources registered into the Spring container through the BeanDefinitionRegistry.
- EnvironmentAware, the environment-aware interface, is implemented so the database settings configured in application.yml can be read from the environment.
Once all physical data sources are registered in the Spring IoC container, the next step is to configure ShardingSphere's logical data sources and the sharding strategies.
3. ShardingSphere sharding configuration
There are 4 modules, ivc, mcc, rcc and smc; each module gets its own xxxShardingJdbcConfig configuration class.
Take MccShardingJdbcConfig as an example; the configuration classes of the other modules look the same.
@Configuration
public class MccShardingJdbcConfig implements ApplicationContextAware {
private static final String MODULE = "mcc";
private static ApplicationContext applicationContext;
@Bean(name = "mccShardingDataSource")
public DataSource getShardingDataSource() throws SQLException {
// bind the logical ds data sources to physical data sources as master-slave pairs
Set<MasterSlaveRuleConfiguration> masterSlaveRuleConfigurationSet = EnvironmentUtil.fillDataSource(MODULE);
// sharding rule configuration object
ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
// set the read-write splitting rules
shardingRuleConfig.setMasterSlaveRuleConfigs(masterSlaveRuleConfigurationSet);
// add a table rule configuration for each table that needs database and table sharding
// actualDataNodes below is an expression describing the physical tables; ShardingSphere expands it into
// every physical table name it believes exists. If a query on the logical table does not carry the table
// sharding key (id), it is routed to all of those physical tables, and any table that was never created
// in the database causes an error.
TableRuleConfiguration ivc_invoice_header = getTableRule("mcc_receipt_header", "ds_${0..7}.mcc_receipt_header_${2021..2026}${['06','07','08','09']}","id");
shardingRuleConfig.getTableRuleConfigs().add(ivc_invoice_header);
TableRuleConfiguration mcc_receipt_line = getTableRule("mcc_receipt_line", "ds_${0..7}.mcc_receipt_line_${2021..2026}${['06','07','08','09']}","id");
shardingRuleConfig.getTableRuleConfigs().add(mcc_receipt_line);
TableRuleConfiguration mcc_receipt_relation = getTableRule("mcc_receipt_relation", "ds_${0..7}.mcc_receipt_relation_${2021..2026}${['06','07','08','09']}","id");
shardingRuleConfig.getTableRuleConfigs().add(mcc_receipt_relation);
// default table sharding strategy: the no-sharding implementation, which keeps the logical table name
shardingRuleConfig.setDefaultTableShardingStrategyConfig(new NoneShardingStrategyConfiguration());
// default database sharding strategy: a custom implementation that takes the tenant code modulo the shard count to pick the concrete shard database
ShardingStrategyConfiguration dataBaseConfig = new StandardShardingStrategyConfiguration("tenant_code",
new DataBaseShardingAlgorithm());
shardingRuleConfig.setDefaultDatabaseShardingStrategyConfig(dataBaseConfig);
shardingRuleConfig.setDefaultDataSourceName("mcc-defaultDataSource");
// return the sharding data source and register it in the Spring container via @Bean
return ShardingDataSourceFactory.createDataSource(EnvironmentUtil.createDataSourceMap(MODULE, applicationContext), shardingRuleConfig, new Properties());
}
/**
* Build the table sharding rule configuration object
* @param tableName logical table name
* @param tableNodes actual table nodes, composed of data source name + table name, separated by a dot
* @param shardingColumn sharding column (key)
* @return
*/
private TableRuleConfiguration getTableRule(String tableName, String tableNodes, String shardingColumn) {
TableRuleConfiguration tableRuleConfig = new TableRuleConfiguration(tableName, tableNodes);
// table strategy: a custom precise sharding algorithm with a single sharding key
tableRuleConfig.setTableShardingStrategyConfig(new StandardShardingStrategyConfiguration(shardingColumn,
new SingleKeyModuloTableShardingAlgorithm()));
return tableRuleConfig;
}
@Override
public void setApplicationContext(ApplicationContext context) throws BeansException {
applicationContext = context;
}
}
Implementing the ApplicationContextAware (application context aware) interface is how the class obtains the physical data source beans that were registered into the Spring container.
The table sharding algorithm SingleKeyModuloTableShardingAlgorithm.java implements the precise sharding algorithm interface PreciseShardingAlgorithm.java; it fits the standard single-sharding-key scenario. For multiple sharding keys, see ComplexShardingStrategyConfiguration in the official docs.
public class SingleKeyModuloTableShardingAlgorithm implements PreciseShardingAlgorithm<Long> {
@Override
public String doSharding(Collection<String> collection, PreciseShardingValue<Long> shardingValue) {
String tableName = shardingValue.getLogicTableName(); // logical table name
// take the first 6 digits of the sharding column value (e.g. a yyyyMM prefix)
Long key = Long.valueOf(String.valueOf(shardingValue.getValue()).substring(0, 6));
return tableName.concat("_").concat(String.valueOf(key)); // return the concrete physical table name
}
}
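A quick sanity check of the algorithm above; the id value is made up, and the 4.x PreciseShardingValue constructor (logic table name, column name, value) is assumed, so treat this as a hedged sketch:
// an id such as 2021061234567 starts with "202106", so the row should be routed to
// mcc_receipt_header_202106, which must exist among the actual data nodes configured above
PreciseShardingValue<Long> value =
        new PreciseShardingValue<>("mcc_receipt_header", "id", 2021061234567L);
String target = new SingleKeyModuloTableShardingAlgorithm()
        .doSharding(Collections.emptyList(), value);
System.out.println(target); // mcc_receipt_header_202106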
The table sharding algorithm NormalTableShardingAlgorithm.java also implements the precise sharding algorithm interface PreciseShardingAlgorithm.java:
public class NormalTableShardingAlgorithm implements PreciseShardingAlgorithm<Long> {
@Override
public String doSharding(Collection<String> collection, PreciseShardingValue<Long> shardingValue) {
return shardingValue.getLogicTableName(); // simply return the logical table name (no table sharding)
}
}
The database sharding algorithm DataBaseShardingAlgorithm.java implements the precise sharding algorithm interface PreciseShardingAlgorithm.java:
public class DataBaseShardingAlgorithm implements PreciseShardingAlgorithm<String> {
@Override
public String doSharding(Collection<String> collection, PreciseShardingValue<String> shardingValue) {
String value = shardingValue.getValue();
int shard = value.hashCode()%collection.size();
Iterator<String> iterator = collection.iterator();
int temp = 0;
String ds = null;
while(iterator.hasNext()){
ds = iterator.next();
if(shard == temp){
break;
}
temp ++;
}
return ds; // return the concrete shard data source
}
}
Utility class EnvironmentUtil.java:
public class EnvironmentUtil {
/**
* Build the read-write splitting rules for the logical data sources
* @param module
* @return
*/
public static Set<MasterSlaveRuleConfiguration> fillDataSource( String module){
Set<MasterSlaveRuleConfiguration> masterSlaveRuleConfigurationSet = new HashSet<>();
String shardParam = EnvironmentConfig.environment.getProperty(CommonConstants.PREFIX + module + ".shard-count");
int shardCount = Objects.nonNull(shardParam) ? Integer.parseInt(shardParam) : 2;
for(int i=0;i<shardCount;i++){
MasterSlaveRuleConfiguration masterSlaveRuleConfig =
new MasterSlaveRuleConfiguration("ds_" + i, module + "-writeDataSource" + i, Arrays.asList(module + "-readDataSource" + i));
masterSlaveRuleConfigurationSet.add(masterSlaveRuleConfig);
}
return masterSlaveRuleConfigurationSet;
}
/**
* Build the map of physical data sources
* @param module business module name: ivc / mcc / rcc / smc
* @param applicationContext application context
* @return
*/
public static Map<String, DataSource> createDataSourceMap(String module, ApplicationContext applicationContext) {
Map<String, DataSource> result = new HashMap<>();
String shardParam = EnvironmentConfig.environment.getProperty(CommonConstants.PREFIX + module + ".shard-count");
int shardCount = Objects.nonNull(shardParam) ? Integer.parseInt(shardParam) : 2;
for(int i=0;i<shardCount;i++){
DataSource readDataSource = (DataSource)applicationContext.getBean(module + "-readDataSource" + i);
DataSource writeDataSource = (DataSource)applicationContext.getBean(module + "-writeDataSource" + i);
result.put(module + "-writeDataSource" + i, writeDataSource);
result.put(module + "-readDataSource" + i, readDataSource);
}
return result;
}
}
4. The MybatisPlusConfig configuration class binds each SqlSessionFactory to its sharding data source
@Configuration
@AutoConfigureAfter(EnvironmentConfig.class)
public class MybatisPlusConfig {
@Bean
@ConditionalOnMissingBean({FileNameInterface.class})
public FileNameInterface defaultFileName() {
return new DefaultFileNameServiceImpl();
}
@Autowired
PaginationInterceptor paginationInterceptor;
// note the trailing colon: when the property is not configured, logImpl defaults to null
@Value("${mybatis-plus.configuration.log-impl:}")
Class<? extends Log> logImpl;
// depends on the transaction manager bean being registered in the container first
@Bean("rccShardingSqlSessionFactory")
//@DependsOn({"transactionManager"})
@Primary
public SqlSessionFactory rccShardingSqlSessionFactory(@Qualifier("rccShardingDataSource") DataSource dataSource, @Qualifier("globalConfiguration") GlobalConfig globalConfig) throws Exception {
// location of the mapper XML files
String mapperLocations = "classpath:mapping/rcc/**/*.xml";
// package containing the entity classes mapped to the tables
String typeAliasesPackage = "com.midea.mcsp.settlement.rcc.entity.rcc";
return buildSqlSessionFactory(dataSource, typeAliasesPackage, mapperLocations, globalConfig);
}
// the SqlSessionFactory for the SMC module's sharded tables is bound to the SMC sharding data source; non-sharded tables go through the default data source
@Bean("smcShardingSqlSessionFactory")
//@DependsOn({"transactionManager"})
public SqlSessionFactory smcShardingSqlSessionFactory(@Qualifier("smcShardingDataSource") DataSource dataSource, @Qualifier("globalConfiguration") GlobalConfig globalConfig) throws Exception {
String mapperLocations = "classpath:mapping/smc/**/*.xml";
String typeAliasesPackage = "com.midea.mcsp.settlement.rcc.entity.smc";
return buildSqlSessionFactory(dataSource, typeAliasesPackage, mapperLocations, globalConfig);
}
@Bean("mccShardingSqlSessionFactory")
@DependsOn({"transactionManager"})
public SqlSessionFactory mccShardingSqlSessionFactory(@Qualifier("mccShardingDataSource") DataSource dataSource, @Qualifier("globalConfiguration") GlobalConfig globalConfig) throws Exception {
String mapperLocations = "classpath:mapping/mcc/**/*.xml";
String typeAliasesPackage = "com.midea.mcsp.settlement.rcc.entity.mcc";
return buildSqlSessionFactory(dataSource, typeAliasesPackage, mapperLocations, globalConfig);
}
@Bean
public GlobalConfig globalConfiguration(MetaObjectHandler myDbFeildFillHandler) {
GlobalConfig conf = new GlobalConfig();
conf.setBanner(false);
GlobalConfig.DbConfig dbConfig = new GlobalConfig.DbConfig();
dbConfig.setIdType(IdType.ASSIGN_ID);
dbConfig.setTableUnderline(true);
dbConfig.setLogicDeleteValue("1");
dbConfig.setLogicNotDeleteValue("0");
conf.setDbConfig(dbConfig);
conf.setMetaObjectHandler(myDbFeildFillHandler);
return conf;
}
@Bean("impalaSqlSessionFactory")
@DependsOn({"transactionManager"})
public SqlSessionFactory impalaSqlSessionFactory(@Qualifier("impalaDataSource") DataSource dataSource, @Qualifier("globalConfiguration") GlobalConfig globalConfig) throws Exception {
// location of the mapper XML files
String mapperLocations = "classpath:mapping/impala/*.xml";
// package containing the entity classes mapped to the tables
String typeAliasesPackage = "com.midea.mcsp.settlement.rcc.entity.impala";
return buildSqlSessionFactory(dataSource, typeAliasesPackage, mapperLocations, globalConfig);
}
// build the SqlSessionFactory from a MybatisSqlSessionFactoryBean
private SqlSessionFactory buildSqlSessionFactory(DataSource dataSource, String typeAliasesPackage, String mapperLocations, GlobalConfig globalConfig) throws Exception {
MybatisSqlSessionFactoryBean sqlSessionFactory = new MybatisSqlSessionFactoryBean();
sqlSessionFactory.setDataSource(dataSource);
sqlSessionFactory.setMapperLocations(new PathMatchingResourcePatternResolver().getResources(mapperLocations));
sqlSessionFactory.setTypeAliasesPackage(typeAliasesPackage);
MybatisConfiguration configuration = new MybatisConfiguration();
configuration.setJdbcTypeForNull(JdbcType.NULL);
configuration.setMapUnderscoreToCamelCase(true);
configuration.setCacheEnabled(false);
//configuration.setLogImpl(org.apache.ibatis.logging.stdout.StdOutImpl.class);
if (logImpl != null) {
configuration.setLogImpl(logImpl);
}
sqlSessionFactory.setConfiguration(configuration);
sqlSessionFactory.setPlugins(new Interceptor[]{
paginationInterceptor
});
sqlSessionFactory.setGlobalConfig(globalConfig);
return sqlSessionFactory.getObject();
}
// JTA distributed transaction manager; the data source connections would need to be AtomikosDataSourceBean for this to apply
@Bean(name = "transactionManager")
public PlatformTransactionManager transactionManager() throws Throwable {
UserTransactionManager userTransactionManager = new UserTransactionManager();
userTransactionManager.setForceShutdown(false);
UserTransaction userTransaction = new UserTransactionImp();
userTransaction.setTransactionTimeout(10000);
return new JtaTransactionManager(userTransaction, userTransactionManager);
}
// one transaction manager per logical data source
@Bean
@Primary // used when no transaction manager is specified explicitly
public PlatformTransactionManager rccManager(@Qualifier("rccShardingDataSource") DataSource dataSource){
return new DataSourceTransactionManager(dataSource);
}
@Bean
public PlatformTransactionManager smcManager(@Qualifier("smcShardingDataSource") DataSource dataSource){
return new DataSourceTransactionManager(dataSource);
}
@Bean
public PlatformTransactionManager mccManager(@Qualifier("mccShardingDataSource") DataSource dataSource){
return new DataSourceTransactionManager(dataSource);
}
}
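A hedged usage sketch for the per-data-source transaction managers defined above (SmcSettleService is a made-up class name): a service picks the matching manager by bean name, and an unqualified @Transactional falls back to the @Primary rccManager.
@Service
public class SmcSettleService {
    @Transactional("smcManager") // run this transaction on the SMC sharding data source
    public void settle() {
        // MyBatis-Plus mapper calls bound to smcShardingSqlSessionFactory go here
    }
}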
There is also an impalaDataSource here, which connects to Impala, the big-data SQL query engine; the table data lives in Kudu, and the connection is configured in application.yml.
Its data source configuration class, registered into the Spring IoC container:
@Configuration
public class ImpalaDataSourceConfig {
@Value("${spring.datasource.impala.url}")
private String url;
@Value("${spring.datasource.impala.driver-class}")
private String driverClass;
@Value("${spring.datasource.impala.username}")
private String username;
@Value("${spring.datasource.impala.password}")
private String password;
@Bean("impalaDataSource")
public DataSource impalaDataSource() {
// minEvictableIdleTimeMillis: minimum time a connection can stay idle before being closed, default 30 minutes
// maxEvictableIdleTimeMillis: maximum time a connection can stay idle before being closed, default 7 hours
DruidDataSource dataSource = new DruidDataSource();
dataSource.setName("impala-datasource");
dataSource.setInitialSize(5);
dataSource.setMinIdle(3); // minimum number of idle connections kept in the pool; new connections are created below this
dataSource.setMaxActive(20);
dataSource.setMaxWait(40000); // maximum wait for a connection; fail if none is obtained within 40 seconds
dataSource.setUrl(url);
dataSource.setDriverClassName(driverClass);
dataSource.setPassword(password);
dataSource.setUsername(username);
dataSource.setTimeBetweenEvictionRunsMillis(60000); // check idle connections every 60 seconds
dataSource.setMinEvictableIdleTimeMillis(60000); // minimum idle time before eviction (default 30 minutes)
dataSource.setMaxEvictableIdleTimeMillis(290000); // maximum idle time before eviction (default 7 hours)
dataSource.setValidationQuery("select 1"); // SQL used to validate connections taken from the pool before they are handed to callers (empty by default)
dataSource.setTestWhileIdle(true); // validate idle connections in the pool (default true)
dataSource.setTestOnBorrow(true); // validate a connection before lending it out; on failure it is dropped and another one is tried; costs some CPU/memory (default false)
dataSource.setTestOnReturn(false); // do not validate connections when they are returned to the pool (default false)
//dataSource.setRemoveAbandoned(true); // remove leaked connections; the Druid docs recommend keeping this off in production to avoid removing connections still in use
dataSource.setRemoveAbandonedTimeout(3600); // 60*60 seconds: a borrowed connection not returned within this time is treated as leaked and may be reclaimed;
// lengthen it if the business legitimately needs longer; this setting applies to active connections
dataSource.setLogAbandoned(true); // log stack traces when a Statement or connection is detected as leaked
return dataSource;
}
}
5. The MapperScannerConfig configuration class binds the mapper interfaces to their SqlSessionFactory
@Configuration
@AutoConfigureAfter(MybatisPlusConfig.class)
public class MapperScannerConfig {
// mapper interfaces for the RCC module's sharded tables are bound to the RCC sharding SqlSessionFactory
@Bean
public MapperScannerConfigurer RccShardingMapperScannerConfigurer() {
MapperScannerConfigurer mapperScannerConfigurer = new MapperScannerConfigurer();
// package to scan for mapper interfaces
mapperScannerConfigurer.setBasePackage("com.midea.mcsp.settlement.rcc.mapper.rcc");
// the SqlSessionFactory these mappers should use
mapperScannerConfigurer.setSqlSessionFactoryBeanName("rccShardingSqlSessionFactory");
mapperScannerConfigurer.setAnnotationClass(RccShardingRepository.class);
return mapperScannerConfigurer;
}
@Bean
public MapperScannerConfigurer SmcShardingMapperScannerConfigurer() {
MapperScannerConfigurer mapperScannerConfigurer = new MapperScannerConfigurer();
mapperScannerConfigurer.setBasePackage("com.midea.mcsp.settlement.rcc.mapper.smc");
mapperScannerConfigurer.setSqlSessionFactoryBeanName("smcShardingSqlSessionFactory");
mapperScannerConfigurer.setAnnotationClass(SmcShardingRepository.class);
return mapperScannerConfigurer;
}
@Bean
public MapperScannerConfigurer MccShardingMapperScannerConfigurer() {
MapperScannerConfigurer mapperScannerConfigurer = new MapperScannerConfigurer();
mapperScannerConfigurer.setBasePackage("com.midea.mcsp.settlement.rcc.mapper.mcc");
mapperScannerConfigurer.setSqlSessionFactoryBeanName("mccShardingSqlSessionFactory");
mapperScannerConfigurer.setAnnotationClass(MccShardingRepository.class);
return mapperScannerConfigurer;
}
// scan the mapper interfaces for the Impala/Kudu big-data tables
@Bean
public MapperScannerConfigurer ImpalaMapperScannerConfigurer() {
MapperScannerConfigurer mapperScannerConfigurer = new MapperScannerConfigurer();
mapperScannerConfigurer.setBasePackage("com.midea.mcsp.settlement.rcc.mapper.impala");
mapperScannerConfigurer.setSqlSessionFactoryBeanName("impalaSqlSessionFactory");
mapperScannerConfigurer.setAnnotationClass(ImpalaRepository.class);
return mapperScannerConfigurer;
}
}
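A hedged sketch of how a mapper interface opts into one of the scanners above; RccShardingRepository is the project's own marker annotation (not shown in these notes), and ExampleRccMapper plus its entity are made-up names.
@RccShardingRepository // project-defined marker annotation picked up by setAnnotationClass above
public interface ExampleRccMapper extends BaseMapper<ExampleRccEntity> {
    // inherits the generic CRUD methods from the MyBatis-Plus BaseMapper
}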
Finally, the loading flow of this data source configuration is shown in the figure below:
Integrating the dynamic data source (dynamic-datasource)
Import the dependency in pom.xml:
<dependency>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>sharding-jdbc-spring-boot-starter</artifactId>
<version>4.1.1</version>
</dependency>
Let's start with a single logical data source, using the SMC configuration as a reference.
#database and table sharding configuration
shardingsphere:
datasource:
# data source names, customizable, comma-separated; every name declared here needs its connection properties defined below
names: "mcsp-smc,mcsp-smc-00,mcsp-smc-01,mcsp-smc-02,mcsp-smc-03,mcsp-smc-04,mcsp-smc-05,mcsp-smc-06,mcsp-smc-07"
mcsp-smc:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-smc
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_smc?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_smc_user
password: fVdb51Kxo
#minimum number of connections in the pool
minimumIdle: 2
#maximum number of connections in the pool
maximumPoolSize: 100
#maximum lifetime of a connection in the pool (created, used, idle, destroyed)
#MyCat uses 30 minutes and the backend MySQL 1 hour, so the application is configured to 15 minutes.
maxLifetime: 1800000
#maximum time to wait for a connection from the pool
connectionTimeout: 6000
#how long a connection may stay idle before it can be reclaimed
idleTimeout: 300000
#threshold for how long a connection may be held; no single SQL should run longer than this.
#Unlike other pools, Hikari creates a timer per connection to watch how long it is held;
#other pools share one timerTask that runs at a configured interval.
leakDetectionThreshold: 15000
mcsp-smc-00:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-smc-00
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_smc_00?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_smc_user
password: fVdb51Kxo
minimumIdle: 2
maximumPoolSize: 100
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-smc-01:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-smc-01
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_smc_01?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_smc_user
password: fVdb51Kxo
minimumIdle: 2
maximumPoolSize: 100
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-smc-02:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-smc-02
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_smc_02?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_smc_user
password: fVdb51Kxo
minimumIdle: 2
maximumPoolSize: 100
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-smc-03:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-smc-03
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_smc_03?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_smc_user
password: fVdb51Kxo
minimumIdle: 2
maximumPoolSize: 100
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-smc-04:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-smc-04
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_smc_04?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_smc_user
password: fVdb51Kxo
minimumIdle: 2
maximumPoolSize: 100
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-smc-05:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-smc-05
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_smc_05?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_smc_user
password: fVdb51Kxo
minimumIdle: 2
maximumPoolSize: 100
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-smc-06:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-smc-06
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_smc_06?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_smc_user
password: fVdb51Kxo
minimumIdle: 2
maximumPoolSize: 100
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-smc-07:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-st-07
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_smc_07?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_smc_user
password: fVdb51Kxo
minimumIdle: 2
maximumPoolSize: 100
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
sharding:
default-data-source-name: mcsp-smc
#tables that need splitting; more than one can be configured
tables:
#logical table names that need sharding; with MyBatis or MyBatis-Plus you only ever operate on the logical table, and the XML files also only reference the logical table
#actual table nodes: an expression such as ds$->{2019..2020}.t_order_$->{2021..2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}} enumerates all t_order_-prefixed tables, here one per month when sharding by month
intf_bd_result_back_header:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.intf_bd_result_back_header_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
database-strategy:
complex:
sharding-columns: request_id
algorithm-class-name: com.midea.mcsp.settlement.smc.infrastructure.config.SmcShardingDatabaseComplexAlgorithm01
table-strategy:
complex:
sharding-columns: id
algorithm-class-name: com.midea.mcsp.settlement.smc.infrastructure.config.SmcShardingTableComplexAlgorithm
intf_bd_result_back_line:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.intf_bd_result_back_line_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
database-strategy:
complex:
sharding-columns: request_id
algorithm-class-name: com.midea.mcsp.settlement.smc.infrastructure.config.SmcShardingDatabaseComplexAlgorithm01
table-strategy:
complex:
sharding-columns: id
algorithm-class-name: com.midea.mcsp.settlement.smc.infrastructure.config.SmcShardingTableComplexAlgorithm
intf_erp_icp_invoice_req_head:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.intf_erp_icp_invoice_req_head_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
intf_erp_icp_invoice_req_line:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.intf_erp_icp_invoice_req_line_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
intf_erp_icp_invoice_tax_contr:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.intf_erp_icp_invoice_tax_contr_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
intf_erp_so_header:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.intf_erp_so_header_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
intf_erp_so_line:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.intf_erp_so_line_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
intf_nc_so_header:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.intf_nc_so_header_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
intf_nc_so_line:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.intf_nc_so_line_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
intf_settle_bill_header:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.intf_settle_bill_header_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
intf_settle_bill_line:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.intf_settle_bill_line_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
intf_settle_bill_line_detail:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.intf_settle_bill_line_detail_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_settle_apply_header:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_settle_apply_header_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_settle_apply_line:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_settle_apply_line_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_settle_bill_adjust_header:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_settle_bill_adjust_header_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_settle_bill_adjust_line:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_settle_bill_adjust_line_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_settle_bill_header:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_settle_bill_header_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_settle_bill_header_his:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_settle_bill_header_his_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_settle_bill_header_extend:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_settle_bill_header_extend_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_settle_bill_header_extend_his:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_settle_bill_header_extend_his_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_settle_bill_line:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_settle_bill_line_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_settle_bill_line_detail:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_settle_bill_line_detail_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_settle_bill_line_detail_his:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_settle_bill_line_detail_his_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_settle_bill_line_his:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_settle_bill_line_his_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_so_ec_header:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_so_ec_header_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_so_ec_line:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_so_ec_line_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_so_header:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_so_header_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_so_line:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_so_line_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_so_line_detail:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_so_line_detail_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_so_receipt_header:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_so_receipt_header_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_so_receipt_line:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_so_receipt_line_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_so_trx_adjust_apply:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_so_trx_adjust_apply_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_so_trx_header:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_so_trx_header_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_so_trx_header_relation:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_so_trx_header_relation_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_so_trx_line:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_so_trx_line_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_so_trx_line_relation:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_so_trx_line_relation_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_so_trx_oi_header:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_so_trx_oi_header_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_so_trx_oi_line:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_so_trx_oi_line_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
smc_so_trx_post_detail:
actual-data-nodes: mcsp-smc-$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.smc_so_trx_post_detail_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
binding-tables[0]: intf_erp_icp_invoice_req_head,intf_erp_icp_invoice_req_line
binding-tables[1]: intf_erp_so_header,intf_erp_so_line
binding-tables[2]: intf_nc_so_header,intf_nc_so_line
binding-tables[3]: intf_settle_bill_header,intf_settle_bill_line,intf_settle_bill_line_detail
binding-tables[4]: smc_settle_apply_header,smc_settle_apply_line
binding-tables[5]: smc_settle_bill_adjust_header,smc_settle_bill_adjust_line
binding-tables[6]: smc_so_ec_header,smc_so_ec_line
binding-tables[7]: smc_so_receipt_header,smc_so_receipt_line
binding-tables[8]: smc_so_trx_oi_header,smc_so_trx_oi_line,smc_so_trx_post_detail
binding-tables[9]: smc_so_trx_header,smc_so_trx_header_relation,smc_so_trx_line,smc_so_trx_line_relation
binding-tables[10]: smc_settle_bill_header,smc_settle_bill_header_extend,smc_settle_bill_header_extend_his,smc_settle_bill_header_his,smc_settle_bill_line,smc_settle_bill_line_detail,smc_settle_bill_line_detail_his,smc_settle_bill_line_his
binding-tables[11]: smc_so_header,smc_so_line,smc_so_line_detail
binding-tables[12]: intf_bd_result_back_header,intf_bd_result_back_line
#default database sharding strategy (here keyed on tenant_code); if you do not need database sharding, simply comment out the database sharding configuration
default-database-strategy:
complex:
sharding-columns: tenant_code
algorithm-class-name: com.midea.mcsp.settlement.smc.infrastructure.config.SmcShardingDatabaseComplexAlgorithm
default-table-strategy:
#default table sharding strategy, written to fit your own needs; here the value of the sharding column determines the suffix of the physical table the row goes to
complex:
sharding-columns: id
algorithm-class-name: com.midea.mcsp.settlement.smc.infrastructure.config.SmcShardingTableComplexAlgorithm
These database and table sharding strategies all implement the ComplexKeysShardingAlgorithm interface, which is really meant for composite sharding with multiple sharding keys. For a single sharding key, StandardShardingStrategyConfiguration would be the more appropriate choice.
public class SmcShardingDatabaseComplexAlgorithm implements ComplexKeysShardingAlgorithm<String> {
private final int DATA_BASE_SIZE = 8;
@Override
public Collection<String> doSharding(Collection<String> availableTargetNames, ComplexKeysShardingValue<String> shardingValue) {
Map<String, Collection<String>> columnNameAndShardingValuesMap = shardingValue.getColumnNameAndShardingValuesMap();
if(columnNameAndShardingValuesMap.containsKey("tenant_code")){
Collection<String> tenantCodeCol = columnNameAndShardingValuesMap.get("tenant_code");
String tenantCode =String.valueOf( tenantCodeCol.iterator().next());
long tenant_code_mod = Math.abs(tenantCode.hashCode()%DATA_BASE_SIZE);
for (String availableTargetName : availableTargetNames) {
if(availableTargetName.endsWith(String.format("%02d", tenant_code_mod))){
return Lists.newArrayList(availableTargetName);
}
}
}
throw new UnsupportedOperationException();
}
}
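The table-side SmcShardingTableComplexAlgorithm is not included in these notes. Purely as a hedged sketch, and assuming the id carries a yyyyMM prefix like the MCC tables above, a complex table algorithm could look like this:
public class SmcShardingTableComplexAlgorithmSketch implements ComplexKeysShardingAlgorithm<Long> {
    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames, ComplexKeysShardingValue<Long> shardingValue) {
        Collection<Long> ids = shardingValue.getColumnNameAndShardingValuesMap().get("id");
        String suffix = String.valueOf(ids.iterator().next()).substring(0, 6); // e.g. "202106"
        List<String> matched = new ArrayList<>();
        for (String name : availableTargetNames) {
            if (name.endsWith(suffix)) { // keep only the month-suffixed physical table
                matched.add(name);
            }
        }
        return matched;
    }
}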
The multi-logical-data-source scenario
Add this additional dependency:
<dependency>
<groupId>com.baomidou</groupId>
<artifactId>dynamic-datasource-spring-boot-starter</artifactId>
<version>2.5.6</version>
</dependency>
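Before the YAML, a hedged note on how the dynamic data source is selected at runtime: dynamic-datasource-spring-boot-starter switches by the @DS annotation, and the primary source (here the sharding one) is used when nothing is annotated; ImpalaReportService is a made-up example.
@Service
public class ImpalaReportService {
    @DS("impala") // route these statements to the impala data source defined under dynamic.datasource
    public List<Map<String, Object>> queryKuduReport() {
        // mapper or JdbcTemplate calls here hit Impala; unannotated beans use the primary "sharding" source
        return Collections.emptyList();
    }
}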
Take the CDC project as an example: it uses two data sources, Impala and the sharded MySQL databases, with the sharded MySQL databases as the default. application.yaml is configured as follows:
spring:
profiles:
active: dev
application:
name: mcsp-cdc-service-zsl
main:
allow-bean-definition-overriding: true
lifecycle:
timeout-per-shutdown-phase: 40s
datasource:
dynamic:
primary: sharding
strict: false
datasource:
impala:
jdbcUrl: jdbc:impala://10.18.25.72:24050/mcsp_kudu;AuthMech=3
url: jdbc:impala://10.18.25.72:24050/mcsp_kudu;AuthMech=3
username: mcsp_bind
password: f8Pi6Xp6Sp
driver-class-name: com.cloudera.impala.jdbc41.Driver
type: com.zaxxer.hikari.HikariDataSource
shardingsphere:
datasource:
# data source names, customizable, comma-separated; every name declared here needs its connection properties defined below
names: "mcsp-cdc,mcsp-cdc-slave,mcsp-cdc-00,mcsp-cdc-00-slave,mcsp-cdc-01,mcsp-cdc-01-slave, \
mcsp-cdc-02,mcsp-cdc-02-slave,mcsp-cdc-03,mcsp-cdc-03-slave,mcsp-cdc-04,mcsp-cdc-04-slave, \
mcsp-cdc-05,mcsp-cdc-05-slave,mcsp-cdc-06,mcsp-cdc-06-slave,mcsp-cdc-07,mcsp-cdc-07-slave"
mcsp-cdc:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
#minimum number of connections in the pool
minimumIdle: 10
#maximum number of connections in the pool
maximumPoolSize: 200
#maximum lifetime of a connection in the pool (created, used, idle, destroyed)
#MyCat uses 30 minutes and the backend MySQL 1 hour, so the application is configured to 15 minutes.
maxLifetime: 1800000
#maximum time to wait for a connection from the pool
connectionTimeout: 6000
#how long a connection may stay idle before it can be reclaimed
idleTimeout: 300000
#threshold for how long a connection may be held; no single SQL should run longer than this.
#Unlike other pools, Hikari creates a timer per connection to watch how long it is held;
#other pools share one timerTask that runs at a configured interval.
leakDetectionThreshold: 15000
mcsp-cdc-slave:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-00:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc-00
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_00?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-00-slave:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc-00
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_00?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-01:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc-01
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_01?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-01-slave:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc-01
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_01?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-02:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc-02
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_02?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-02-slave:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc-02
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_02?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-03:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc-03
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_03?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-03-slave:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc-03
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_03?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-04:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc-04
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_04?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-04-slave:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc-04
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_04?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-05:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc-05
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_05?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-05-slave:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc-05
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_05?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-06:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc-06
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_06?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-06-slave:
type: com.zaxxer.hikari.HikariDataSource
poolName: pool-mcsp-cdc-06
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_06?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-07:
type: com.zaxxer.hikari.HikariDataSource
        poolName: pool-mcsp-cdc-07
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_07?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
mcsp-cdc-07-slave:
type: com.zaxxer.hikari.HikariDataSource
        poolName: pool-mcsp-cdc-07
driverClassName: com.mysql.cj.jdbc.Driver
jdbcUrl: jdbc:mysql://10.16.92.106:3306/mcsp_cdc_07?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai&zeroDateTimeBehavior=convertToNull
username: mcsp_cdc_user
password: f5FpgUfb1
minimumIdle: 10
maximumPoolSize: 200
maxLifetime: 1800000
connectionTimeout: 6000
idleTimeout: 300000
leakDetectionThreshold: 15000
sharding:
master-slave-rules:
mcsp-cdc-r:
master-data-source-name: mcsp-cdc
slave-data-source-names: mcsp-cdc-slave
mcsp-cdc-r00:
master-data-source-name: mcsp-cdc-00
slave-data-source-names: mcsp-cdc-00-slave
mcsp-cdc-r01:
master-data-source-name: mcsp-cdc-01
slave-data-source-names: mcsp-cdc-01-slave
mcsp-cdc-r02:
master-data-source-name: mcsp-cdc-02
slave-data-source-names: mcsp-cdc-02-slave
mcsp-cdc-r03:
master-data-source-name: mcsp-cdc-03
slave-data-source-names: mcsp-cdc-03-slave
mcsp-cdc-r04:
master-data-source-name: mcsp-cdc-04
slave-data-source-names: mcsp-cdc-04-slave
mcsp-cdc-r05:
master-data-source-name: mcsp-cdc-05
slave-data-source-names: mcsp-cdc-05-slave
mcsp-cdc-r06:
master-data-source-name: mcsp-cdc-06
slave-data-source-names: mcsp-cdc-06-slave
mcsp-cdc-r07:
master-data-source-name: mcsp-cdc-07
slave-data-source-names: mcsp-cdc-07-slave
default-data-source-name: mcsp-cdc-r
      # database sharding strategy: complex sharding on tenant_code with a custom algorithm; if database sharding is not needed, simply comment out this block
default-database-strategy:
complex:
sharding-columns: tenant_code
algorithm-class-name: com.midea.mcsp.settlement.cdc.infrastructure.config.CdcShardingDatabaseComplexAlgorithm
default-table-strategy:
        # table sharding strategy: write whatever strategy you need; here a custom complex algorithm uses the id column to decide which table a row goes to
complex:
sharding-columns: id
algorithm-class-name: com.midea.mcsp.settlement.cdc.infrastructure.config.CdcShardingTableComplexAlgorithm
      # tables that need to be sharded; multiple tables can be configured here
tables:
        # logical table name to be sharded; with MyBatis or MyBatis-Plus you only operate on the logical table, and the mapper XML also references only the logical table
cdc_account_amount_record:
          # actual data nodes of the logical table. When the table range is known (e.g. monthly tables), an inline expression such as ds$->{2019..2020}.t_order_$->{01..12} can enumerate them;
          # the Groovy expression below expands to data sources mcsp-cdc-r00 .. mcsp-cdc-r07 combined with monthly tables cdc_account_amount_record_202101 .. cdc_account_amount_record_202112
actual-data-nodes: mcsp-cdc-r$->{(0..7).collect{t ->t.toString().padLeft(2,'0')}}.cdc_account_amount_record_$->{2021}$->{(1..12).collect{t ->t.toString().padLeft(2,'0')}}
props:
sql:
show: true
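The two algorithm classes referenced above (CdcShardingDatabaseComplexAlgorithm and CdcShardingTableComplexAlgorithm) are project-specific and are not reproduced in this document. As an illustration only, a self-contained database-routing algorithm in the same style as the fragment shown earlier (hash the tenant_code and pick the data source whose name ends with the two-digit suffix) might look roughly like this against the ShardingSphere 4.x complex-sharding API; the class name and the DATA_BASE_SIZE value are assumptions:
import java.util.Collection;
import java.util.Collections;

import org.apache.shardingsphere.api.sharding.complex.ComplexKeysShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.complex.ComplexKeysShardingValue;

public class TenantDatabaseComplexAlgorithm implements ComplexKeysShardingAlgorithm<String> {

    // number of physical databases, assumed to match mcsp-cdc-r00 .. mcsp-cdc-r07 above
    private static final int DATA_BASE_SIZE = 8;

    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames,
                                         ComplexKeysShardingValue<String> shardingValue) {
        Collection<String> tenantCodes =
                shardingValue.getColumnNameAndShardingValuesMap().get("tenant_code");
        if (tenantCodes != null) {
            for (String tenantCode : tenantCodes) {
                // hash the tenant code and match the data source ending with the 2-digit suffix
                String suffix = String.format("%02d", Math.abs(tenantCode.hashCode() % DATA_BASE_SIZE));
                for (String targetName : availableTargetNames) {
                    if (targetName.endsWith(suffix)) {
                        return Collections.singletonList(targetName);
                    }
                }
            }
        }
        // fail fast if no data source matches
        throw new UnsupportedOperationException("no data source matched tenant_code");
    }
}
The table algorithm is registered the same way via algorithm-class-name; how it maps an id to one of the cdc_account_amount_record_yyyyMM tables is project-specific and not shown here. With props.sql.show: true, ShardingSphere prints both the logical SQL and the rewritten actual SQL, which makes it easy to verify that rows land on the expected data node and table.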