mvn test command plugin
法然, 2022-12-05 19:40:44 +08:00
parent ea02f1c503
commit 973182e8f2
5 changed files with 686 additions and 45 deletions
```
mvn compiler:compile
```
## 2 Common plugins
Maven plugins fall into two categories: build plugins and reporting plugins.
* javadoc: generates Javadoc for the project.
* antrun: runs a set of Ant tasks from any phase of the build.
## 2 Custom plugins
### Creating the project
```
mvn hello:sayHello
```
## 3 help plugin: analyzing dependencies
![](image/2022-11-05-23-03-56.png)
## 4 archetype plugin: creating a project
```
mvn archetype:generate -DgroupId=com.ykl -DartifactId=project04-maven-import -DarchetypeArtifactId=maven-archetype-quickstart -Dversion=0.0.1-SNAPSHOT
```
## 5 dependency plugin: dependency management and analysis
* List the project's dependencies:
```
mvn dependency:list
mvn dependency:tree
```
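A couple of hedged variations on these goals (the coordinates below are only illustrative; `-Dincludes` and `dependency:analyze` are standard features of the plugin):
```
# Restrict the printed tree to one groupId (pattern is an example):
mvn dependency:tree -Dincludes=org.springframework:*
# Report declared-but-unused and used-but-undeclared dependencies:
mvn dependency:analyze
```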
## 6 spring-boot-maven-plugin
spring-boot-maven-plugin is the Maven packaging plugin provided by Spring Boot. It builds directly runnable (executable) jar or war packages.
```xml
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
```
The plugin provides six Maven goals:
* build-info: generates the project build-information file build-info.properties.
* help: shows help for spring-boot-maven-plugin; running `mvn spring-boot:help -Ddetail=true -Dgoal=<goal-name>` prints the parameter descriptions for a goal.
* repackage: produces an executable jar or war package; this is the plugin's core goal.
* run: runs the Spring Boot application.
* start: controls the lifecycle during integration testing (starts the application).
* stop: controls the lifecycle during integration testing (stops the application).
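The repackage goal can also be tuned in the plugin declaration. A minimal sketch, where `classifier` is a documented parameter of repackage but the value `exec` is only illustrative:
```xml
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<!-- keep the plain jar and attach the executable one as *-exec.jar -->
<classifier>exec</classifier>
</configuration>
</plugin>
```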
## 7 surefire plugin
### Overview
If you have ever run `mvn test`, or run any other Maven command that executed your test cases, you have already used maven-surefire-plugin: it is the plugin Maven uses to run tests, and it falls back to its default configuration when none is given. Its surefire:test goal is bound to Maven's test phase by default.
If you declare the plugin yourself, you can pin a specific version and supply custom configuration.
### Importing the plugin
```xml
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>3.0.0-M5</version>
</plugin>
</plugins>
</pluginManagement>
</build>
```
Run `mvn test` to exercise the plugin.
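A minimal configuration sketch (`includes` and `skipTests` are documented Surefire parameters; the pattern value is illustrative):
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>3.0.0-M5</version>
<configuration>
<!-- run only classes matching this pattern -->
<includes>
<include>**/*Test.java</include>
</includes>
<!-- set to true to skip the tests entirely -->
<skipTests>false</skipTests>
</configuration>
</plugin>
```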
### Source code walkthrough
As covered earlier, Mojo is the core class behind plugin execution, and SurefirePlugin is a subclass of Mojo. So the entry point for studying maven-surefire-plugin is the SurefirePlugin class.
![](image/2022-12-05-19-38-40.png)
![](image/2022-12-05-19-38-23.png)
![](image/2022-12-05-19-38-13.png)
### Common parameters
| Usage | Parameter | How to set | Description |
|---|---|---|---|
| common | skipTests | -D, or an XML configuration tag | skips running the unit tests |
| common | maven.test.skip.exec | -D, or an XML configuration tag | skips running the unit tests |
| common | maven.test.skip | -D, or an XML configuration tag | skips the unit tests (compilation included) |
| rare | testClassesDirectory | XML configuration tag | directory of the compiled test classes |
| rare | maven.test.dependency.excludes | -D, or an XML configuration tag | dependencies to exclude, in the form groupId:artifactId |
| rare | maven.test.additionalClasspath | -D, or an XML configuration tag | appends entries to the classpath |
| rare | project.build.testSourceDirectory | XML configuration tag | the test source directory |
| rare | excludes | XML configuration | patterns of classes to exclude from testing, e.g. **/*Test.java |
| rare | surefire.reportNameSuffix | -D, or an XML configuration tag | suffix appended to test report names |
| rare | maven.test.redirectTestOutputToFile | -D, or an XML configuration tag | redirects test output into files under the report directory |
| rare | failIfNoTests | -D, or an XML configuration tag | fails the build if no tests are found |
| rare | forkMode | -D, or an XML configuration tag | forking mode |
| rare | jvm | -D, or an XML configuration tag | path of the JVM to use; defaults to the system JVM |
| rare | argLine | -D, or an XML configuration tag | arguments passed to the forked JVM |
| rare | threadCount | -D, or an XML configuration tag | number of threads |
| rare | forkCount | -D, or an XML configuration tag | number of VMs to fork; a value ending in C (e.g. 1.5C) is multiplied by the number of CPU cores |
| rare | reuseForks | -D, or an XML configuration tag | whether forked processes may be reused |
| rare | disableXmlReport | -D, or an XML configuration tag | disables the XML report |
| rare | enableAssertions | -D, or an XML configuration tag | enables Java assert statements |
forkMode accepts the values "never", "once", "always", and "pertest".
* pertest: creates a new JVM process for each test. The most isolated way to run tests, but also the slowest; not suitable for continuous regression on Hudson.
* once: runs all tests in a single process. This is the default, and the recommended setting for continuous regression on Hudson.
* always: forks a new JVM for each test class; running tests in parallel requires JUnit 4.7 or later and Surefire 2.6 or later.
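Note that Surefire 2.14 and later deprecate forkMode in favor of forkCount and reuseForks; a sketch of an equivalent configuration (values illustrative):
```xml
<configuration>
<!-- one forked JVM per CPU core, reused across test classes -->
<forkCount>1C</forkCount>
<reuseForks>true</reuseForks>
</configuration>
```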


test.sh (new file, 571 lines)
#!/bin/bash
# version: 0.9.2
export LANG=C
export PATH=/sbin:/bin:/usr/local/sbin:/usr/sbin:/usr/local/bin:/usr/bin:/usr/X11R6/bin:/home/admin/bin
readonly LOGS_DIR=/home/admin/logs/
readonly CONF_FILE=/home/admin/conf/zclean.conf
# Compatibility: some legacy apps deployed through 云游 prefix the variables with app.env etc., so grep is used to filter for the key.
RESERVE=`([[ -f /opt/antcloud/conf/env.file ]] && cat /opt/antcloud/conf/env.file || env) |grep ZCLEAN_RESERVE_DAYS | awk -F= '{print $2}'`
RESERVE=${RESERVE:-14}
MAX_LOG_DIR_SIZE=`([[ -f /opt/antcloud/conf/env.file ]] && cat /opt/antcloud/conf/env.file || env) |grep ZCLEAN_MAX_LOG_DIR_SIZE | awk -F= '{print $2}'`
MAX_LOG_DIR_SIZE=${MAX_LOG_DIR_SIZE:-100} # unit is G
DELETE_FLAG='-delete'
DEBUG=''
CHUNK_SIZE=''
INTERACTIVE=0
ZCLEAN_DIGEST="${LOGS_DIR}/zclean.log.$(date +%F)"
{
readonly ZCLEAN_OK=1
readonly ZCLEAN_CRUSH=2
readonly ZCLEAN_ERROR=3
readonly ZCLEAN_IGNORE=4
}
[[ ! -d $LOGS_DIR ]] && exit
CMD_PREFIX=''
if which ionice >& /dev/null; then
CMD_PREFIX="ionice -c3 "
fi
if which nice >& /dev/null; then
CMD_PREFIX="nice -n 19 $CMD_PREFIX"
fi
FIND_CMD="${CMD_PREFIX}find"
RM_CMD="${CMD_PREFIX}rm"
TRUNCATE_CMD=''
if which truncate >& /dev/null; then
TRUNCATE_CMD="${CMD_PREFIX}truncate"
fi
LSOF_CMD=''
if which lsof >& /dev/null; then
LSOF_CMD="lsof"
fi
LSOF_FILE=/tmp/zclean_lsof.out
if [[ -d /dev/shm ]]; then
shm_mode=$(stat -c "%A" /dev/shm)
if [[ $shm_mode == drwxrwxrwt ]]; then
LSOF_FILE=/dev/shm/zclean_lsof.out
fi
fi
prepare_lsof() {
# workaround for an AliOS7 kernel bug on these hosts: skip lsof and treat every *.log file as open
if [[ $HOSTNAME =~ paycorecloud-30- ]]; then
$FIND_CMD $LOGS_DIR -name '*.log' > $LSOF_FILE
return
fi
if [[ $HOSTNAME =~ paycorecloud-31- ]]; then
$FIND_CMD $LOGS_DIR -name '*.log' > $LSOF_FILE
return
fi
if [[ -n $LSOF_CMD ]]; then
ulimit -n 1024
$LSOF_CMD +D $LOGS_DIR 2> /dev/null > $LSOF_FILE
fi
}
delete_lsof() {
$RM_CMD -rf $LSOF_FILE
}
# returns success only when lsof data is available and the file appears open
file_in_lsof() {
local fpath=$1
if [[ -n $LSOF_CMD && -f $LSOF_FILE ]]; then
grep -q $fpath $LSOF_FILE
return $?
else
return 1
fi
}
log_error() {
echo $(date +"%F %T") [ERROR] $@ >> $ZCLEAN_DIGEST
}
log_info() {
echo $(date +"%F %T") [INFO] $@ >> $ZCLEAN_DIGEST
}
log_warn() {
echo $(date +"%F %T") [WARN] $@ >> $ZCLEAN_DIGEST
}
log_debug() {
[[ $DEBUG != '-debug' ]] && return
echo $(date +"%F %T") [DEBUG] $@ >> $ZCLEAN_DIGEST
}
delete_files() {
[[ $DELETE_FLAG != '-delete' ]] && return
$RM_CMD -rf "$@" >& /dev/null
}
crush_files() {
[[ $DELETE_FLAG != '-delete' ]] && return
for f in "$@"; do
> $f
done
}
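crush_files empties a file with `>` instead of rm so that a process still writing to it does not keep the freed blocks pinned. A minimal sketch of that idiom (the file is a throwaway temp file):

```shell
# Truncate a file in place with '>' (the crush_files idiom): the inode
# survives, so any writer's open file descriptor stays valid, but the
# file's contents are released immediately.
f=$(mktemp)
echo "some log data" > "$f"
> "$f"                 # truncate to zero length without unlinking
size=$(wc -c < "$f")
rm -f "$f"
echo "$size"
```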
clean_file() {
# eliminates file in a low-speed way (default: 20MB/S)
local fpath=$1
local fsize=$2
local chunksize=${CHUNK_SIZE:-20}
if [[ $DELETE_FLAG != '-delete' || ! -f $fpath ]]; then
return $ZCLEAN_ERROR
fi
local is_open=0
if file_in_lsof $fpath >& /dev/null; then
is_open=1
fi
if [[ $is_open -eq 1 && $fsize -eq 0 ]]; then
log_debug "ignore $fpath(+) size $fsize"
return $ZCLEAN_IGNORE
fi
if [[ $chunksize -eq 0 || -z $TRUNCATE_CMD ]]; then
# fast delete
if [[ $is_open -eq 1 ]]; then
crush_files $fpath
log_debug "removed $fpath(+) size $fsize directly"
else
delete_files $fpath
log_debug "removed $fpath size $fsize directly"
fi
else
# slow delete
local tstart=$SECONDS
local tstake=$((1+tstart))
local loop=$((fsize/(1048576*chunksize)+1))
local tdiff
if [[ $fsize -eq 0 ]]; then
loop=0
fi
for ((i=0; i<loop; ++i)); do
$TRUNCATE_CMD -s "-${chunksize}M" $fpath
tdiff=$((tstake-SECONDS))
if [[ $tdiff -gt 0 ]]; then
sleep $tdiff
fi
tstake=$((tstake+1))
done
if [[ $is_open -eq 1 ]]; then
log_debug \
"removed $fpath(+) size $fsize in $((SECONDS-tstart)) seconds"
else
log_debug \
"removed $fpath size $fsize in $((SECONDS-tstart)) seconds"
fi
fi
# there is a time window between the lsof snapshot and the removal
if [[ -n $LSOF_CMD && $is_open -eq 0 ]]; then
delete_files $fpath
return $ZCLEAN_OK
else
return $ZCLEAN_CRUSH
fi
}
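The slow-delete branch above paces deletion at roughly CHUNK_SIZE MB per second: one truncate step per loop iteration, sleeping up to the next second boundary. The loop count works out as sketched here (the 50 MB file size is a made-up example):

```shell
# Number of truncate steps clean_file performs for a given file size:
# one CHUNK_SIZE-MB step per full chunk, plus one for the remainder.
fsize=52428800                              # hypothetical 50 MB file, in bytes
chunksize=20                                # default chunk: 20 MB per step
loop=$((fsize / (1048576 * chunksize) + 1))
echo "$loop"                                # 2 full 20 MB chunks + remainder = 3
```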
get_home_usage() {
local usage
#usage=$(df $LOGS_DIR|awk 'END {print $5}'|tr -d '%')
usage=$(df $LOGS_DIR|tail -n 1|awk '{print $(NF-1)}'|sed -e 's/%//g')
if [[ -z $usage ]]; then
log_error "can't get home partition usage"
exit 1
fi
usage_by_du=`du -sk $LOGS_DIR | awk '{print $1}'`
usage_by_du=$(( (usage_by_du * 100) / (MAX_LOG_DIR_SIZE * 1024 * 1024) ))
if [[ $usage_by_du -gt $usage ]]; then
log_info "calculate usage based on MAX_LOG_DIR_SIZE, MAX_LOG_DIR_SIZE: $MAX_LOG_DIR_SIZE, usage: $usage_by_du"
usage=$usage_by_du
fi
echo $usage
}
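get_home_usage returns whichever is larger: the df percentage of the partition, or the du measurement taken against the MAX_LOG_DIR_SIZE quota. The quota arithmetic converts units as follows (the sizes below are made up):

```shell
# du -sk reports KiB; MAX_LOG_DIR_SIZE is in GiB, so GiB -> KiB needs
# the *1024*1024 factor before taking the percentage.
usage_by_du=52428800        # hypothetical 50 GiB of logs, in KiB
MAX_LOG_DIR_SIZE=100        # hypothetical 100 GiB quota
usage_by_du=$(( (usage_by_du * 100) / (MAX_LOG_DIR_SIZE * 1024 * 1024) ))
echo "$usage_by_du"         # percent of quota used
```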
sleep_dif()
{
local secs idc index
if [[ $HOSTNAME =~ ^[a-z0-9]+-[0-9]+-[0-9]+$ ]]; then
idc=$(echo $HOSTNAME|awk -F- '{print $2}')
index=$(echo $HOSTNAME|awk -F- '{print $3}')
secs=$(( (index*19 +idc*7)%233 ))
else
secs=$((RANDOM%133))
fi
sleep $secs
log_info slept $secs seconds
}
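sleep_dif derives a deterministic per-host delay from the hostname so machines in one cluster do not all start cleaning at the same moment. For a hostname of the form app-idc-index (the name below is made up):

```shell
# Deterministic jitter: hash the idc and index fields of the hostname
# into a delay between 0 and 232 seconds.
host=myapp-3-17                              # hypothetical host name
idc=$(echo $host | awk -F- '{print $2}')     # 3
index=$(echo $host | awk -F- '{print $3}')   # 17
secs=$(( (index*19 + idc*7) % 233 ))
echo "$secs"                                 # (17*19 + 3*7) % 233 = 111
```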
clean_expired() {
local keep_days=$((RESERVE-1))
if [[ $HOSTNAME =~ paycorecloud-30- ]]; then
keep_days=1
fi
local fpath fsize fmtime how_long expired
local ret_code=$ZCLEAN_OK
$FIND_CMD $LOGS_DIR \
-type f \
-name '*log*' \
! -name '*\.[0-9]dt\.log*' \
! -name '*\.[0-9][0-9]dt\.log*' \
! -name '*\.[0-9][0-9][0-9]dt\.log*' \
-mtime +$keep_days \
-printf '%p %s\n' | \
while read fpath fsize; do
clean_file $fpath $fsize
ret_code=$?
if [[ $ret_code -eq $ZCLEAN_OK || $ret_code -eq $ZCLEAN_CRUSH ]]; then
log_info "deleted expired file $fpath size $fsize"
fi
done
# http://doc.alipay.net/pages/viewpage.action?pageId=71187095
$FIND_CMD $LOGS_DIR \
-type f \
\( -name '*\.[0-9]dt\.log*' -o \
-name '*\.[0-9][0-9]dt\.log*' -o \
-name '*\.[0-9][0-9][0-9]dt\.log*' \) \
-printf '%p %s %TY-%Tm-%Td\n' | \
while read fpath fsize fmtime; do
how_long=$(echo $fpath | grep -o '[0-9]\+dt' | tr -d '[a-z]')
expired=$(date -d"$how_long days ago" +"%F")
if [[ $fmtime > $expired ]]; then
continue
else
clean_file $fpath $fsize
ret_code=$?
if [[ $ret_code -eq $ZCLEAN_OK || $ret_code -eq $ZCLEAN_CRUSH ]]; then
log_info "deleted expired file $fpath size $fsize"
fi
fi
done
}
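The second find above implements per-file retention: a name containing `.Ndt.` means "keep for N days". Extracting N works like this (the file name is made up):

```shell
# Pull the retention period out of a ".Ndt." file name, as clean_expired does.
fpath="gc.30dt.log"                          # hypothetical: keep for 30 days
how_long=$(echo $fpath | grep -o '[0-9]\+dt' | tr -d '[a-z]')
echo "$how_long"                             # the N in .Ndt.
```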
clean_huge() {
local blocks big_size fpath fsize
blocks=$(df /home -k|awk 'END {print $2}')
if [[ -z $blocks ]]; then
log_error "can't get home partition total size"
exit 1
fi
if [[ $blocks -ge ${MAX_LOG_DIR_SIZE}*1024*1024 ]]; then
blocks=$(( MAX_LOG_DIR_SIZE*1024*1024 ))
fi
# 120G
if [[ $blocks -ge 125829120 ]]; then
big_size=50G
else
big_size=30G
fi
$FIND_CMD $LOGS_DIR \
-type f \
-name '*log*' \
-size +$big_size \
-printf '%p %s\n' | \
while read fpath fsize; do
crush_files "$fpath"
log_warn "deleted huge file $fpath size $fsize"
done
}
clean_by_day() {
local how_long=$1
local ret_code=$ZCLEAN_OK
$FIND_CMD $LOGS_DIR \
-type f \
-name '*log*' \
-mtime "+${how_long}" \
-printf '%p %s\n' | \
while read fpath fsize; do
clean_file $fpath $fsize
ret_code=$?
if [[ $ret_code -eq $ZCLEAN_OK || $ret_code -eq $ZCLEAN_CRUSH ]]; then
log_info "deleted $((how_long+1)) days ago file $fpath size $fsize"
fi
done
}
clean_by_hour() {
local how_long=$1
local ret_code=$ZCLEAN_OK
$FIND_CMD $LOGS_DIR \
-type f \
-name '*log*' \
-mmin "+$((how_long*60))" \
-printf '%p %s\n' | \
while read fpath fsize; do
clean_file $fpath $fsize
ret_code=$?
if [[ $ret_code -eq $ZCLEAN_OK || $ret_code -eq $ZCLEAN_CRUSH ]]; then
log_info "deleted $how_long hours ago file $fpath size $fsize"
fi
done
}
clean_largest() {
local fsize fpath fblock
local ret_code=$ZCLEAN_OK
$FIND_CMD $LOGS_DIR \
-type f \
-printf '%b %s %p\n' | \
sort -nr | head -1 | \
while read fblock fsize fpath ; do
# 10G
if [[ $fsize -gt 10737418240 ]]; then
crush_files $fpath
else
clean_file $fpath $fsize
fi
ret_code=$?
if [[ $ret_code -eq $ZCLEAN_OK || $ret_code -eq $ZCLEAN_CRUSH ]]; then
log_info "deleted largest file $fpath size $fsize"
fi
done
}
in_low_traffic() {
local now=$(date '+%R')
if [[ "$now" > "04:00" && "$now" < "04:30" ]]; then
return 0
else
return 1
fi
}
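in_low_traffic compares zero-padded HH:MM strings with the `[[ ]]` string operators, which order exactly like clock times. A small sketch with a fixed time:

```shell
# Lexicographic comparison of zero-padded "HH:MM" strings matches
# chronological order, so no date parsing is needed.
now="04:15"                                  # hypothetical current time
if [[ "$now" > "04:00" && "$now" < "04:30" ]]; then
window=open
else
window=closed
fi
echo "$window"
```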
clean_until() {
local from_rate to_rate cur_usage old_usage how_long count force
how_long=$((RESERVE-1))
from_rate=$1
to_rate=$2
force=$3
count=0
cur_usage=$(get_home_usage)
# usage this high suggests some huge files exist
if [[ $cur_usage -ge 97 ]]; then
clean_huge
old_usage=$cur_usage
cur_usage=$(get_home_usage)
if [[ $cur_usage -ne $old_usage ]]; then
log_info "usage from $old_usage to $cur_usage"
fi
fi
if ! in_low_traffic; then
[[ $cur_usage -lt $from_rate ]] && return
fi
prepare_lsof
clean_expired
old_usage=$cur_usage
cur_usage=$(get_home_usage)
if [[ $cur_usage -ne $old_usage ]]; then
log_info "usage from $old_usage to $cur_usage"
fi
# now we have to remove recent logs by date
while [[ $cur_usage -gt $to_rate ]]; do
if [[ $how_long -lt 1 ]]; then
break
else
how_long=$((how_long-1))
fi
clean_by_day $how_long
old_usage=$cur_usage
cur_usage=$(get_home_usage)
if [[ $cur_usage -ne $old_usage ]]; then
log_info "usage from $old_usage to $cur_usage"
fi
done
# in hours
how_long=24
while [[ $cur_usage -gt $to_rate ]]; do
if [[ $how_long -lt 2 ]]; then
break
else
how_long=$((how_long-1))
fi
clean_by_hour $how_long
old_usage=$cur_usage
cur_usage=$(get_home_usage)
if [[ $cur_usage -ne $old_usage ]]; then
log_info "usage from $old_usage to $cur_usage"
fi
done
[[ $force -ne 1 ]] && return
# last resort: find the largest logs and delete them
if [[ $CHUNK_SIZE -ne 0 ]]; then
CHUNK_SIZE=100
fi
while [[ $cur_usage -gt $to_rate ]]; do
if [[ $count -gt 5 ]]; then
log_error "give up deleting largest files"
break
fi
count=$((count+1))
clean_largest
old_usage=$cur_usage
cur_usage=$(get_home_usage)
if [[ $cur_usage -ne $old_usage ]]; then
log_info "usage from $old_usage to $cur_usage"
fi
done
delete_lsof
}
ensure_unique() {
local pgid=$(ps -p $$ -o pgid=)
local pids=$(ps -e -o pid,pgid,cmd | \
grep [z]clean | grep bash | \
awk "\$2 != $pgid {print \$1}")
if [[ -n $pids ]]; then
if [[ $INTERACTIVE -eq 1 ]]; then
kill $pids
else
log_info "$0 is running, wait for another round of dispatch"
exit 0
fi
fi
}
_main() {
local to_rate=90
local from_rate=$to_rate
local do_sleep=0
local force=0
# load config
if [[ -f $CONF_FILE && ! "$*" =~ --noconf ]]; then
while read -r line; do
key=$(echo $line|cut -d= -f1)
value=$(echo $line|cut -d= -f2)
case $key in
to)
to_rate=$value;;
block)
CHUNK_SIZE=$value;;
fast)
CHUNK_SIZE=0;;
from)
from_rate=$value;;
max_size)
MAX_LOG_DIR_SIZE=$value;;
sleep)
do_sleep=1;;
debug)
DEBUG='-debug';;
force)
force=1;;
*)
;;
esac
done < $CONF_FILE
fi
# option help
# -r clean to this ratio
# -b wipe this blocksize each time
# -t start cleaning when above this ratio
# -m max size of log dir, unit is G
# -n fast delete (use rm -rf)
# -s random sleep awhile in a app clusters
# -d extra debug logging
# -f force delete largest file
while getopts ":r:b:t:nsdfi" opt; do
case $opt in
r)
if [[ ! $OPTARG =~ ^[0-9]+$ ]]; then
echo "$0: rate $OPTARG is an invalid number" >&2
exit 1;
fi
if [[ $OPTARG -le 1 || $OPTARG -ge 99 ]]; then
echo "$0: rate $OPTARG out of range (1, 99)" >&2
exit 1;
fi
to_rate=$OPTARG ;;
b)
if [[ ! $OPTARG =~ ^[0-9]+[mMgG]?$ ]]; then
echo "$0: block size $OPTARG is invalid" >&2
exit 1;
fi
if [[ $OPTARG =~ [gG]$ ]]; then
CHUNK_SIZE=$(echo $OPTARG|tr -d 'gG')
CHUNK_SIZE=$((CHUNK_SIZE*1024))
else
CHUNK_SIZE=$(echo $OPTARG|tr -d 'mM')
fi ;;
t)
if [[ ! $OPTARG =~ ^[0-9]+$ ]]; then
echo "$0: rate $OPTARG is an invalid number" >&2
exit 1;
fi
if [[ $OPTARG -le 1 || $OPTARG -ge 99 ]]; then
echo "$0: rate $OPTARG out of range (1, 99)" >&2
exit 1;
fi
from_rate=$OPTARG ;;
m)
if [[ ! $OPTARG =~ ^[0-9]+$ ]]; then
echo "$0: max size $OPTARG is invalid" >&2
exit 1;
fi
MAX_LOG_DIR_SIZE=$OPTARG ;;
n)
CHUNK_SIZE=0 ;;
s)
do_sleep=1 ;;
d)
DEBUG='-debug' ;;
f)
force=1 ;;
i)
INTERACTIVE=1 ;;
\?)
echo "$0: invalid option: -$OPTARG" >&2
exit 1;;
:)
echo "$0: option -$OPTARG requires an argument" >&2
exit 1 ;;
esac
done
if [[ $to_rate -ge $from_rate ]]; then
to_rate=$from_rate
fi
ensure_unique
[[ $do_sleep -eq 1 ]] && sleep_dif
clean_until $from_rate $to_rate $force
}
# TODO make a decision whether /home/admin is innocent
# TODO daemonize
_main "$@"