[TI mmWave Radar] Acquiring DCA1000 Data Without mmWave Studio, and Automated Real-Time Data Capture
The functionality mmWave Studio provides is entirely sufficient; there is no need to agonize over low-latency, GUI-free data transfer from the DCA1000. If you want maximum speed while keeping enough compute, the only real route is a Linux board with your own driver handling the serial port and UDP. No radar product would ship with the DCA1000 approach anyway, so the time is better spent studying how to write the TI drivers!
Table of Contents
- Preface
- DCA1000 Configuration (IWR6843AOP as Example)
- EVM Hardware Setup
- DCA1000 Hardware Setup
- Data Capture with mmWave Studio
- Environment
- Configuration
- Connection
- StaticConfig
- DataConfig
- SensorConfig
- Troubleshooting
- Real-Time Capture
- Data Capture without mmWave Studio
- Controlling the DCA1000
- Controlling the Radar Board
- Real-Time Capture
- Appendix: Structural Overview
- Radar Operating Principles
- Antenna Layout
- Chip Architecture
- Demo Project Functionality
- Importing the CCS Project
- Project Description
- Software Tasks
- Data Path
- Output information sent to host
- List of detected objects
- Range profile
- Azimuth static heatmap
- Azimuth/Elevation static heatmap
- Range/Doppler heatmap
- Stats information
- Side information of detected objects
- Temperature Stats
- Range Bias and Rx Channel Gain/Phase Measurement and Compensation
- Streaming data over LVDS
- Implementation Notes
- How to bypass CLI
- Hardware Resource Allocation
Preface
I have been asked by more than one person about using the DCA1000 for a graduation project.
It is usually a task handed down by an advisor, as shown below:
Leaving SAR imaging aside, some are even asked to use the DCA1000 for target recognition or people counting?
[TI mmWave Radar] A Beginner's Look at Radar Development: Why Building Your Thesis on the DCA1000 Means Delaying Graduation (IWR6843AOP as Example)
The DCA1000 is meant for early-stage mmWave radar development, when you are not yet familiar with the TI ecosystem and need to capture raw data and run simulations.
It can only be used together with an EVM and TI's official mmWave Studio software.
The output has the data parsing, signal processing, and related steps already done for you,
so you can configure and simulate the radar RF front end without writing a single line of code.
You can also take the raw data into MATLAB for data processing and algorithm simulation.
For real applications, however, it simply cannot meet real-time requirements.
To satisfy the real-time demands of a product, you must, once the RF design and algorithms are settled, port that work into the TI mmWave ecosystem, develop with CCS, and flash the code onto the radar itself!!!
If your advisor really insists on the DCA1000 for your thesis, either the advisor does not understand radar at all, or there is no real-time product requirement and only algorithm simulation is expected. If it is the latter, clarify the scope and keep the effort minimal.
If you must develop with the DCA1000 anyway, the rest of this article covers how.
DCA1000 Configuration (IWR6843AOP as Example)
EVM Hardware Setup
The IWR6843AOPEVM-G revision can be connected directly to the DCA1000EVM for data capture;
the MMWAVEICBOOST carrier board is not needed.
Data is captured directly with the DCA1000 plus the mmWave Studio software.
The official manual, User's Guide: 60GHz mmWave Sensor EVM, lists switch settings for this mode, but they are actually wrong.
Use the configuration from TI's resource explorer instead:
dev.ti.com/tirex/explore/node?a=VLyFKFf__4.12.0&node=A__AGTrhNYW8jE6cMxbovlfaA__com.ti.mmwave_industrial_toolbox__VLyFKFf__4.12.0
For how to use the DCA1000 and the software, see the application report ZHCAB69 (12.2021), "Real-Time ADC Raw Data Capture Using the Low-Speed Serial Bus".
DCA1000 Hardware Setup
For the DCA1000 hardware setup, see SPRUIK7 (05.2018), DCA1000EVM Quick Start Guide.
Pay particular attention to configuring the static IP address:
The IWR6843AOPEVM does not need the serial cable connected, but it does need the power cable.
In effect, the switches on the IWR6843AOPEVM simply route the signals out to the 60-pin connector.
The official documentation contains many errors.
Also, when the IWR6843AOPEVM-G is used in DCA1000EVM mode, no XDS port shows up on the PC, which also disagrees with the manual.
Once everything is wired and working, just open mmWave Studio and proceed with data capture.
If the hardware is configured correctly, the configuration information appears in mmWave Studio's Output window.
Data Capture with mmWave Studio
With the hardware connected, you can open mmWave Studio.
Environment
You need the mmWave SDK installed,
as well as the MATLAB Runtime,
which provides the libraries the tool depends on; without it the application will not run.
ww2.mathworks.cn/products/compiler/matlab-runtime.html
The latest version may have bugs;
the recommended download is MCR_R2015aSP1_win32_installer.exe:
in.mathworks.com/supportfiles/downloads/R2015a/deployment_files/R2015aSP1/installers/win32/MCR_R2015aSP1_win32_installer.exe
Configuration
As noted above, if the hardware is configured correctly the configuration information appears in mmWave Studio's Output window.
Follow the steps shown in the figure, working through the Radar API panel one step at a time.
Connection
For RS232, select a baud rate of 115200 and the enhanced COM port.
The BSS FW is located at:
D:\TI_SDK\mmwave_studio_02_01_01_00\rf_eval_firmware\radarss
The MSS FW is at:
D:\TI_SDK\mmwave_studio_02_01_01_00\rf_eval_firmware\masterss
After selecting them, click Load,
and finally click SPI Connect.
For the radar frequency, select 60 GHz.
Once connected:
StaticConfig
Following the blue markers in the figure, change the parameters to suit your needs and click each Set button in turn.
DataConfig
SensorConfig
Once the parameters are configured as prompted, capture can begin.
First click 1️⃣ DCA1000 ARM, then click 2️⃣ Trigger Frame to start capturing radar data. When the capture finishes, click 3️⃣ PostProc to view the processed radar results. The captured raw data is stored in the 4️⃣ adc_data.bin file.
If no data is produced, the DCA1000 ARM step almost certainly failed to connect; click Set Up DCA1000 on the left to connect first. Also grant the firewall the widest permissions, or simply turn the firewall off.
After the first step, it is advisable to wait a couple of seconds before starting the capture.
Trigger Frame starts the capture; Stop Frame stops it.
The size of the generated bin file is determined by the configuration, while its contents depend on the capture scene, duration, and so on (a parsing sketch follows the figures below).
Radar data:
Generated data:
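As a hedged illustration of how the generated adc_data.bin can be unpacked afterwards (this is not part of the official mmWave Studio flow): the sketch below assumes 16-bit complex, non-interleaved samples in the 2-lane LVDS layout that TI's readDCA1000.m reference script handles for xWR6843 parts, with 4 RX channels. The sample counts and ordering are assumptions; match them to your own chirp configuration.

```python
import numpy as np

def parse_adc_bin(path, num_rx=4, num_adc_samples=256):
    """Hedged sketch: unpack a DCA1000 adc_data.bin capture into complex chirps.

    Assumes the 2-lane complex layout (I0, I1, Q0, Q1, ...) used by TI's
    readDCA1000.m for xWR6843 devices. num_rx / num_adc_samples are
    placeholders -- set them from your SensorConfig.
    """
    raw = np.fromfile(path, dtype=np.int16)
    raw = raw.reshape(-1, 4)                      # rows of [I0, I1, Q0, Q1]
    iq = (raw[:, 0:2] + 1j * raw[:, 2:4]).reshape(-1)
    samples_per_chirp = num_adc_samples * num_rx  # each RX's samples are contiguous per chirp
    num_chirps = iq.size // samples_per_chirp
    iq = iq[: num_chirps * samples_per_chirp]
    return iq.reshape(num_chirps, num_rx, num_adc_samples)

chirps = parse_adc_bin("adc_data.bin")
print(chirps.shape)                               # (chirps, rx channels, ADC samples)
```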
Troubleshooting
When connecting to the DCA1000, the Output window reports errors such as failing to read the FPGA version:
Timeout Error! System disconnected
This error occurs while connecting to the FPGA and is a network problem. Set the firewall to allow the application on both private and public networks,
or the IP address was not configured correctly,
or simply turn the firewall off,
or try a different Ethernet port.
When connecting over SPI, the Output window reports:
MSS Power Up async event was not received!
Check the DCA1000 power supply,
erase the board with UniFlash and reconnect,
or the board's switches were not set to SPI mode; set them accordingly.
Real-Time Capture
There are two approaches:
- Configure as few frames as possible, start the radar, and during capture keep generating bin files while continuously parsing and processing them.
- Configure as many frames as possible (or unlimited), start the radar, and during capture keep sniffing the UDP stream and parse the packets directly (see the sketch after this section).
Both real-time approaches depend on writing Lua scripts for mmWave Studio.
Lua scripts can be imported into mmWave Studio,
and the related configuration can also be saved to and loaded from a CSV file.
The key is the frame configuration.
No of Frames sets how many frames a single capture collects.
With method 1 it can be set to 10 or 20 frames (roughly 1 s),
i.e. a bin file is generated and parsed about once per second; generating one every few seconds also works.
With method 2, set 512 frames or more (or 0, meaning run indefinitely) according to your needs.
512 frames works out to at least 20 s; configure it to match your actual requirements.
Start the capture with Trigger Frame; while it runs, real-time acquisition and parsing is done by unpacking the UDP stream, as illustrated below.
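The following is a hedged sketch of method 2, based on the raw-data streaming format described in the DCA1000EVM CLI software user guide: raw data arrives as UDP packets on the data port (4098 by default), each carrying a 4-byte sequence number and a 6-byte byte count ahead of the ADC payload. The port, the PC-side IP, and the header layout are assumptions here; verify them against your own cf.json and FPGA configuration.

```python
import socket
import struct

PC_IP = "192.168.33.30"   # the static PC IP from the quick-start guide (assumption)
DATA_PORT = 4098          # default DCA1000 raw-data port (assumption -- check cf.json)

def capture_udp(num_packets=1000):
    """Hedged sketch: collect raw ADC payloads streamed by the DCA1000."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((PC_IP, DATA_PORT))
    sock.settimeout(2.0)
    payloads = []
    for _ in range(num_packets):
        try:
            packet, _ = sock.recvfrom(4096)
        except socket.timeout:
            break                                        # radar stopped or nothing is streaming
        seq = struct.unpack("<I", packet[0:4])[0]        # 4-byte packet sequence number
        nbytes = int.from_bytes(packet[4:10], "little")  # 6-byte running byte count
        payloads.append((seq, packet[10:]))              # remainder is raw int16 ADC data
    sock.close()
    return payloads

if __name__ == "__main__":
    packets = capture_udp()
    print(f"captured {len(packets)} packets")
```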
Data Capture without mmWave Studio
In principle the DCA1000 and the radar can be controlled directly over their interfaces, but TI also provides an API wrapped in command-line applications, so everything can be driven from a shell script.
Controlling the DCA1000
TI provides an API for the DCA1000,
namely:
DCA1000EVM_CLI_Control.exe
Through this executable, a JSON configuration can be passed to the DCA1000 and the capture card is configured automatically.
Usage is documented in:
TI_DCA1000EVM_CLI_Software_UserGuide.pdf
Invoke it from cmd in the program's directory:
DCA1000EVM_CLI_Control.exe fpga cf.json
DCA1000EVM_CLI_Control.exe record cf.json
DCA1000EVM_CLI_Control.exe start_record cf.json
These apply, respectively, the FPGA configuration, the recording configuration, and the start-recording command, all taken from cf.json.
The complete configuration file is cf.json,
whose original location is C:\ti\mmwave_studio_02_01_01_00\mmWaveStudio\PostProc;
modify it to suit your own needs.
The DCA1000 then starts listening for data; as soon as the radar starts running, the DCA1000 begins capturing.
Controlling the Radar Board
Flash any of the radar's out_of_box demo projects onto the board, then send the radar configuration as CLI commands over the serial port or via TI's out-of-box demo GUI,
and make sure the DCA1000 is already listening for data before sensorStart is issued.
Real-Time Capture
Once data capture has been automated with a shell script or similar, real-time capture can proceed with either method from the previous chapter; a sketch of one way to wire it together follows.
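The following is only a hedged sketch of such automation, assuming the DCA1000EVM_CLI_Control.exe sub-commands shown above, a cf.json in the same directory, and that the radar's configuration UART accepts the out-of-box demo CLI commands at 115200 baud (sent here with pyserial). The install path, COM port, profile file, and delays are placeholders.

```python
import subprocess
import time
from pathlib import Path

import serial  # pyserial, used for the radar's configuration UART

CLI_DIR = Path(r"C:\ti\mmwave_studio_02_01_01_00\mmWaveStudio\PostProc")  # assumed location of the CLI tools
RADAR_CLI_PORT = "COM4"        # placeholder: the radar configuration UART (115200 baud)

def dca(subcommand):
    """Run DCA1000EVM_CLI_Control.exe with the given sub-command and cf.json."""
    subprocess.run([str(CLI_DIR / "DCA1000EVM_CLI_Control.exe"), subcommand, "cf.json"],
                   cwd=CLI_DIR, check=True)

def send_radar_cfg(cfg_file="profile.cfg"):
    """Push the demo CLI configuration line by line; the last line should be sensorStart."""
    with serial.Serial(RADAR_CLI_PORT, 115200, timeout=1) as port:
        for line in Path(cfg_file).read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("%"):   # skip comments and blank lines
                port.write((line + "\n").encode())
                time.sleep(0.05)                    # small gap so the demo can process each command

if __name__ == "__main__":
    dca("fpga")            # configure the FPGA
    dca("record")          # configure recording
    dca("start_record")    # start listening for streamed data
    time.sleep(2)          # let the capture card arm before the radar starts
    send_radar_cfg()       # configure the radar and issue sensorStart
```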
Appendix: Structural Overview
Radar Operating Principles
The radar's working cycle is: power up, transmit chirps, frame end, processing, then around again.
Within one frame, transmission comes first: for example, 96 chirps are sent out in sequence, received back, mixed and filtered, and ADC-sampled; all of that belongs to the RF front end. After the RF stage come FFT, CFAR, and DOA, which belong to signal processing. The results are written into the output structure, which is the point cloud obtained for the current frame.
During the RF transmit phase, one frame sends a number of chirps, as in the upper-left of the figure above.
The first green dot is the frame start and the second green dot is the frame end,
with a number of chirps (the small triangles) transmitted in between.
The number of chirps is called numLoops (in code, the rlFrameCfg_t structure);
in the mmWave Studio GUI it is called No of Chirp Loops.
The time from frame end to the end of the period is the computation time, called the inter frame period.
The time from frame start to the end of the cycle is called framePeriodicity (in code, the rlFrameCfg_t structure);
in the mmWave Studio GUI it is called Periodicity,
as shown in the frame-configuration part of the figure below.
The inter-frame period (here the whole cycle is 55 ms, for example)
is the time available for computation and processing, and it is necessarily less than 55 ms.
If there are many chirps, the computation time shrinks accordingly.
If you are processing point-cloud data, the point cloud only needs to be computed once per frame:
compute the xyz coordinates and velocity for the current frame and store a timestamp. A rough timing example follows.
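To make the timing budget concrete, here is a hedged back-of-the-envelope calculation. The numbers (96 chirps, 7 us idle time, 44 us ramp end time, 55 ms periodicity) are assumptions for illustration, using the same Tc = idle time + ramp end time approximation quoted from the TI documentation later in this article.

```python
# Assumed example numbers -- substitute your own chirp/frame configuration
idle_time_us = 7
ramp_end_time_us = 44
num_loops = 96             # chirps per frame (numLoops / "No of Chirp Loops")
frame_periodicity_ms = 55  # framePeriodicity / "Periodicity"

chirp_time_us = idle_time_us + ramp_end_time_us         # Tc per chirp
active_time_ms = num_loops * chirp_time_us / 1000.0
inter_frame_ms = frame_periodicity_ms - active_time_ms

print(f"active chirping time: {active_time_ms:.2f} ms")  # ~4.90 ms
print(f"inter-frame period  : {inter_frame_ms:.2f} ms")  # ~50.10 ms left for processing
```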
Antenna Layout
In the industrial toolbox, under:
C:\ti\mmwave_industrial_toolbox_4_12_0\antennas\ant_rad_patterns
there are antenna-layout descriptions for each EVM board.
The same information is in the EVM user's guides,
for example for the IWR6843AOPEVM:
The antenna spacing and related details are in the data sheet:
Chip Architecture
The IWR6843AOP can be divided into three main subsystems plus a set of peripherals:
BSS: the radar RF front end.
MSS: a Cortex-R4F core, mainly for control.
DSS: a C674x DSP core, mainly for signal processing.
Peripherals: UART, GPIO, DPM, HWA, and so on.
Most peripherals can be used by either the MSS or the DSS.
The BSS radar front end is driven from the SDK through the mmWave API.
In terms of code structure there can be two programs, one for the MSS and one for the DSS, running at the same time, synchronizing and cooperating through certain peripherals.
It is also possible to run only one core: in MSS-only mode the signal-processing peripherals can still be used, and that is exactly what the demo code does.
The demo code flow is shown in the figure below.
Demo Project Functionality
The IWR6843AOP out-of-box project targets the IWR6843AOPEVM board.
It puts both of the IWR6843AOP's UARTs to use and does two main things:
it accepts parameter configuration over the 115200-baud UART, establishing a handshake protocol,
and outputs radar data over the 115200*8-baud UART.
The project is meant to be used with TI's official visualizer:
mmWave_Demo_Visualizer_3.6.0
After connecting to the serial ports, the visualizer automates the configuration and visualizes the radar data.
The radar parameter configuration lives in the SDK under mmw\profiles.
In short, you can edit the files in that directory to change the radar parameters.
That approach is not convenient for direct changes, though: the parameters are fixed each time the visualizer runs (and the visualizer needs the SDK environment), so they can also be hard-coded in the source, which is the direction this article explores.
Importing the CCS Project
First, find the project settings in the industrial toolbox directory:
C:\ti\mmwave_industrial_toolbox_4_12_0\labs\Out_Of_Box_Demo\src\xwr6843AOP
Import the project with CCS's Import Project feature, and the environment is ready.
The SDK used here is the latest version, 3.6.
Project Description
The following comes from the official documentation and can be skipped.
Software Tasks
The demo consists of the following (SYSBIOS) tasks:
MmwDemo_initTask. This task is created/launched by main and is a one-time active task whose main functionality is to initialize drivers (<driver>_init), MMWave module (MMWave_init), DPM module (DPM_init), open UART and data path related drivers (EDMA, HWA), and create/launch the following tasks (the CLI_task is launched indirectly by calling CLI_open).
CLI_task. This command line interface task provides a simplified 'shell' interface which allows the configuration of the BSS via the mmWave interface (MMWave_config). It parses input CLI configuration commands like chirp profile and GUI configuration. When the sensor start CLI command is parsed, all actions related to starting the sensor and starting the processing of the data path are taken. When the sensor stop CLI command is parsed, all actions related to stopping the sensor and stopping the processing of the data path are taken.
MmwDemo_mmWaveCtrlTask. This task is used to provide an execution context for the mmWave control, it calls in an endless loop the MMWave_execute API.
MmwDemo_DPC_ObjectDetection_dpmTask. This task is used to provide an execution context for DPM (Data Path Manager) execution; it calls the DPM_execute API in an endless loop. In this context, all of the registered object detection DPC (Data Path Chain) APIs like configuration, control and execute will take place in this task. When the DPC's execute API produces the detected objects and other results, they are transmitted out of the UART port for display using the visualizer.
Data Path
Top Level Data Path Processing Chain
Top Level Data Path Timing
The data path processing consists of taking ADC samples as input and producing detected objects (point-cloud and other information) to be shipped out of UART port to the PC. The algorithm processing is realized using the DPM registered Object Detection DPC. The details of the processing in DPC can be seen from the following doxygen documentation:
ti/datapath/dpc/objectdetection/objdethwa/docs/doxygen/html/index.html
Output information sent to host
Output packets with the detection information are sent out every frame through the UART. Each packet consists of the header MmwDemo_output_message_header_t and the number of TLV items containing various data information with types enumerated in MmwDemo_output_message_type_e. The numerical values of the types can be found in mmw_output.h. Each TLV item consists of type, length (MmwDemo_output_message_tl_t) and payload information. The structure of the output packet is illustrated in the following figure. Since the length of the packet depends on the number of detected objects it can vary from frame to frame. The end of the packet is padded so that the total packet length is always multiple of 32 Bytes.
Output packet structure sent to UART
The following subsections describe the structure of each TLV.
List of detected objects
Type: MMWDEMO_OUTPUT_MSG_DETECTED_POINTS
Length: (Number of detected objects) x (size of DPIF_PointCloudCartesian_t)
Value: Array of detected objects. The information of each detected object is as per the structure DPIF_PointCloudCartesian_t. When the number of detected objects is zero, this TLV item is not sent. The maximum number of objects that can be detected in a sub-frame/frame is DPC_OBJDET_MAX_NUM_OBJECTS. The orientation of x, y and z axes relative to the sensor is as per the following figure. (Note: The antenna arrangement in the figure is shown for standard EVM (see gAntDef_default) as an example but the figure is applicable for any antenna arrangement.)
Coordinate Geometry
The whole detected objects TLV structure is illustrated in figure below.
Detected objects TLV
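As a hedged host-side sketch (not part of the demo sources) of walking this packet: the layout assumed below is the common SDK 3.x out-of-box demo format, i.e. an 8-byte magic word {0x0102, 0x0304, 0x0506, 0x0708}, eight little-endian uint32 header fields, then the TLVs, with each detected point carried as four float32 values (x, y, z, velocity). The magic word, the TLV type value, and whether the TLV length covers the payload only are assumptions to verify against mmw_output.h for your SDK version.

```python
import struct

MAGIC = b"\x02\x01\x04\x03\x06\x05\x08\x07"  # assumed magic word (uint16 sequence 0x0102..0x0708, little-endian)
HEADER_FMT = "<8I"                           # version, totalPacketLen, platform, frameNumber,
                                             # timeCpuCycles, numDetectedObj, numTLVs, subFrameNumber
TLV_DETECTED_POINTS = 1                      # assumed value of MMWDEMO_OUTPUT_MSG_DETECTED_POINTS

def parse_packet(buf: bytes):
    """Hedged sketch: parse one UART output packet into header fields and detected points."""
    assert buf.startswith(MAGIC), "buffer is not aligned to the magic word"
    header = struct.unpack_from(HEADER_FMT, buf, len(MAGIC))
    num_tlvs = header[6]
    points = []
    offset = len(MAGIC) + struct.calcsize(HEADER_FMT)
    for _ in range(num_tlvs):
        tlv_type, tlv_len = struct.unpack_from("<2I", buf, offset)
        offset += 8
        if tlv_type == TLV_DETECTED_POINTS:
            # DPIF_PointCloudCartesian_t: x, y, z, velocity as float32 (16 bytes per point)
            for i in range(tlv_len // 16):
                points.append(struct.unpack_from("<4f", buf, offset + 16 * i))
        # Assumes tlv_len covers the payload only; some demo versions include the 8-byte TL header.
        offset += tlv_len
    return header, points
```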
Range profile
Type: MMWDEMO_OUTPUT_MSG_RANGE_PROFILE
Length: (Range FFT size) x (size of uint16_t)
Value: Array of profile points at 0th Doppler (stationary objects). The points represent the sum of log2 magnitudes of received antennas expressed in Q9 format.
Noise floor profile
Type: MMWDEMO_OUTPUT_MSG_NOISE_PROFILE
Length: (Range FFT size) x (size of uint16_t)
Value: This is the same format as range profile but the profile is at the maximum Doppler bin (maximum speed objects). In general for stationary scene, there would be no objects or clutter at maximum speed so the range profile at such speed represents the receiver noise floor.
Azimuth static heatmap
Type: MMWDEMO_OUTPUT_MSG_AZIMUT_STATIC_HEAT_MAP
Length: (Range FFT size) x (Number of "azimuth" virtual antennas) x (size of cmplx16ImRe_t_)
Value: Array DPU_AoAProcHWA_HW_Resources::azimuthStaticHeatMap. The antenna data are complex symbols, with imaginary first and real second in the following order:
Imag(ant 0, range 0), Real(ant 0, range 0),...,Imag(ant N-1, range 0),Real(ant N-1, range 0)...Imag(ant 0, range R-1), Real(ant 0, range R-1),...,Imag(ant N-1, range R-1),Real(ant N-1, range R-1)
Note that the number of virtual antennas is equal to the number of “azimuth” virtual antennas. The antenna symbols are arranged in the order as they occur at the input to azimuth FFT. Based on this data the static azimuth heat map could be constructed by the GUI running on the host.
Azimuth/Elevation static heatmap
Type: MMWDEMO_OUTPUT_MSG_AZIMUT_ELEVATION_STATIC_HEAT_MAP
Length: (Range FFT size) x (Number of all virtual antennas) x (size of cmplx16ImRe_t_)
Value: Array DPU_AoAProcHWA_HW_Resources::azimuthStaticHeatMap. The antenna data are complex symbols, with imaginary first and real second in the following order:
Imag(ant 0, range 0), Real(ant 0, range 0),...,Imag(ant N-1, range 0),Real(ant N-1, range 0)...Imag(ant 0, range R-1), Real(ant 0, range R-1),...,Imag(ant N-1, range R-1),Real(ant N-1, range R-1)
Note that the number of virtual antennas is equal to the total number of active virtual antennas. The antenna symbols are arranged in the order as they occur in the radar cube matrix. This TLV is sent by AOP version of MMW demo, that uses AOA2D DPU. Based on this data the static azimuth or elevation heat map could be constructed by the GUI running on the host.
Range/Doppler heatmap
Type: MMWDEMO_OUTPUT_MSG_RANGE_DOPPLER_HEAT_MAP
Length: (Range FFT size) x (Doppler FFT size) x (size of uint16_t)
Value: Detection matrix DPIF_DetMatrix::data. The order is:
X(range bin 0, Doppler bin 0),...,X(range bin 0, Doppler bin D-1),...X(range bin R-1, Doppler bin 0),...,X(range bin R-1, Doppler bin D-1)
Stats information
Type: MMWDEMO_OUTPUT_MSG_STATS
Length: (size of MmwDemo_output_message_stats_t)
Value: Timing information as per MmwDemo_output_message_stats_t. See the timing diagram below related to the stats.
Processing timing
Note:The MmwDemo_output_message_stats_t::interChirpProcessingMargin is not computed (it is always set to 0). This is because there is no CPU involvement in the 1D processing (only HWA and EDMA are involved), and it is not possible to know how much margin is there in chirp processing without CPU being notified at every chirp when processing begins (chirp event) and when the HWA-EDMA computation ends. The CPU is intentionally kept free during 1D processing because a real application may use this time for doing some post-processing algorithm execution.
While the MmwDemo_output_message_stats_t::interFrameProcessingTime reported will be of the current sub-frame/frame, the MmwDemo_output_message_stats_t::interFrameProcessingMargin and MmwDemo_output_message_stats_t::transmitOutputTime will be of the previous sub-frame (of the same MmwDemo_output_message_header_t::subFrameNumber as that of the current sub-frame) or of the previous frame.
The MmwDemo_output_message_stats_t::interFrameProcessingMargin excludes the UART transmission time (available as MmwDemo_output_message_stats_t::transmitOutputTime). This is done intentionally to inform the user of a genuine inter-frame processing margin without being influenced by a slow transport like UART, this transport time can be significantly longer for example when streaming out debug information like heat maps. Also, in a real product deployment, higher speed interfaces (e.g LVDS) are likely to be used instead of UART. User can calculate the margin that includes transport overhead (say to determine the max frame rate that a particular demo configuration will allow) using the stats because they also contain the UART transmission time.
The CLI command "guiMonitor" specifies which TLV element will be sent out within the output packet. The arguments of the CLI command are stored in the structure MmwDemo_GuiMonSel_t.
Side information of detected objects
Type: MMWDEMO_OUTPUT_MSG_DETECTED_POINTS_SIDE_INFO
Length: (Number of detected objects) x (size of DPIF_PointCloudSideInfo_t)
Value: Array of detected objects side information. The side information of each detected object is as per the structure DPIF_PointCloudSideInfo_t. When the number of detected objects is zero, this TLV item is not sent.
Temperature Stats
Type: MMWDEMO_OUTPUT_MSG_TEMPERATURE_STATS
Length: (size of MmwDemo_temperatureStats_t)
Value: Structure of detailed temperature report as obtained from the radar front end. MmwDemo_temperatureStats_t::tempReportValid is set to the return value of rlRfGetTemperatureReport. If MmwDemo_temperatureStats_t::tempReportValid is 0, values in MmwDemo_temperatureStats_t::temperatureReport are valid, else they should be ignored. This TLV is sent along with the Stats TLV described in Stats information.
Range Bias and Rx Channel Gain/Phase Measurement and Compensation
Because of imperfections in antenna layouts on the board, RF delays in SOC, etc, there is need to calibrate the sensor to compensate for bias in the range estimation and receive channel gain and phase imperfections. The following figure illustrates the calibration procedure.
Calibration procedure ladder diagram
The calibration procedure includes the following steps:
1. Set a strong target like a corner reflector at the distance of X meters (X less than 50 cm is not recommended) at boresight.
2. Set the following command in the configuration profile in .../profiles/profile_calibration.cfg, to reflect the position X as follows:
measureRangeBiasAndRxChanPhase 1 X D
where D (in meters) is the size of the window around X in which the peak will be searched. The purpose of the search window is to keep the test environment from being overly constrained, say because it may not be possible to clear it of all reflectors that may be stronger than the one used for calibration. The window size is recommended to be at least the distance equivalent of a few range bins. One range bin for the calibration profile (profile_calibration.cfg) is about 5 cm. The first argument "1" enables the measurement. The stated configuration profile (.cfg) must be used, otherwise the calibration may not work as expected (this profile ensures all transmit and receive antennas are engaged, among other things needed for calibration).
3. Start the sensor with the configuration file.
4. In the configuration file, the measurement is enabled, because of which the DPC will be configured to perform the measurement and generate the measurement result (DPU_AoAProc_compRxChannelBiasCfg_t) in its result structure (DPC_ObjectDetection_ExecuteResult_t::compRxChanBiasMeasurement); the measurement results are written out on the CLI port (MmwDemo_measurementResultOutput) in the format below. For details of how the DPC performs the measurement, see the DPC documentation.
compRangeBiasAndRxChanPhase <rangeBias> <Re(0,0)> <Im(0,0)> <Re(0,1)> <Im(0,1)> ... <Re(0,R-1)> <Im(0,R-1)> <Re(1,0)> <Im(1,0)> ... <Re(T-1,R-1)> <Im(T-1,R-1)>
5. The command printed out on the CLI can now be copied and pasted into any configuration file for correction purposes. This configuration will be passed to the DPC for the purpose of applying compensation during angle computation; the details of this can be seen in the DPC documentation. If compensation is not desired, the following command should be given (depending on the EVM and antenna arrangement), which sets the range bias to 0 and the phase coefficients to unity so that there is no correction. Note the two commands must always be given in any configuration file; typically the measure command will be disabled when the correction command is the desired one.
For ISK EVM:
compRangeBiasAndRxChanPhase 0.0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
For AOP EVM:
compRangeBiasAndRxChanPhase 0.0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0 1 0 -1 0
Streaming data over LVDS
The LVDS streaming feature enables the streaming of HW data (a combination of ADC/CP/CQ data) and/or user specific SW data through LVDS interface. The streaming is done mostly by the CBUFF and EDMA peripherals with minimal CPU intervention. The streaming is configured through the MmwDemo_LvdsStreamCfg_t CLI command which allows control of HSI header, enable/disable of HW and SW data and data format choice for the HW data. The choices for data formats for HW data are:
MMW_DEMO_LVDS_STREAM_CFG_DATAFMT_DISABLED
MMW_DEMO_LVDS_STREAM_CFG_DATAFMT_ADC
MMW_DEMO_LVDS_STREAM_CFG_DATAFMT_CP_ADC_CQ
In order to see the high-level data format details corresponding to the above data format configurations, refer to the corresponding slides in ti\drivers\cbuff\docs\CBUFF_Transfers.pptx
When HW data LVDS streaming is enabled, the ADC/CP/CQ data is streamed per chirp on every chirp event. When SW data streaming is enabled, it is streamed during inter-frame period after the list of detected objects for that frame is computed. The SW data streamed every frame/sub-frame is composed of the following in time:
HSI header (HSIHeader_t): refer to HSI module for details.
User data header: MmwDemo_LVDSUserDataHeader
User data payloads:
Point-cloud information as a list : DPIF_PointCloudCartesian_t x number of detected objects
Point-cloud side information as a list : DPIF_PointCloudSideInfo_t x number of detected objects
The format of the SW data streamed is shown in the following figure:
LVDS SW Data format
Note: Only single-chirp formats are allowed, multi-chirp is not supported.
When number of objects detected in frame/sub-frame is 0, there is no transmission beyond the user data header.
For HW data, the inter-chirp duration should be sufficient to stream out the desired amount of data. For example, if the HW data-format is ADC and HSI header is enabled, then the total amount of data generated per chirp is:
(numAdcSamples * numRxChannels * 4 (size of complex sample) + 52 [sizeof(HSIDataCardHeader_t) + sizeof(HSISDKHeader_t)] ) rounded up to multiples of 256 [=sizeof(HSIHeader_t)] bytes.
The chirp time Tc in us = idle time + ramp end time in the profile configuration. For n-lane LVDS with each lane at a maximum of B Mbps,
maximum number of bytes that can be send per chirp = Tc * n * B / 8 which should be greater than the total amount of data generated per chirp i.e
Tc * n * B / 8 >= round-up(numAdcSamples * numRxChannels * 4 + 52, 256).
E.g if n = 2, B = 600 Mbps, idle time = 7 us, ramp end time = 44 us, numAdcSamples = 512, numRxChannels = 4, then 7650 >= 8448 is violated so this configuration will not work. If the idle-time is doubled in the above example, then we have 8700 > 8448, so this configuration will work.
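A quick hedged restatement of this throughput check in a few lines of Python (only re-deriving the numbers quoted above):

```python
import math

# Worked example from the text: n = 2 lanes at B = 600 Mbps, 512 ADC samples, 4 RX channels
idle_us, ramp_end_us = 7, 44
n_lanes, lane_mbps = 2, 600
num_adc_samples, num_rx = 512, 4

needed = math.ceil((num_adc_samples * num_rx * 4 + 52) / 256) * 256   # bytes per chirp, HSI header enabled
tc = idle_us + ramp_end_us                                            # chirp time Tc in us
budget = tc * n_lanes * lane_mbps / 8                                 # bytes the lanes can carry per chirp

print(budget, needed)                        # 7650.0 vs 8448 -> violated, as stated
tc2 = 2 * idle_us + ramp_end_us              # doubling the idle time
print(tc2 * n_lanes * lane_mbps / 8, needed) # 8700.0 vs 8448 -> this configuration works
```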
For SW data, the number of bytes to transmit each sub-frame/frame is:
52 [sizeof(HSIDataCardHeader_t) + sizeof(HSISDKHeader_t)] + sizeof(MmwDemo_LVDSUserDataHeader_t) [=8] +
number of detected objects (Nd) * { sizeof(DPIF_PointCloudCartesian_t) [=16] + sizeof(DPIF_PointCloudSideInfo_t) [=4] } rounded up to multiples of 256 [=sizeof(HSIHeader_t)] bytes.
or X = round-up(60 + Nd * 20, 256). So the time to transmit this data will be
X * 8 / (n*B) us. The maximum number of objects (Ndmax) that can be detected is defined in the DPC (DPC_OBJDET_MAX_NUM_OBJECTS). So if Ndmax = 500, then time to transmit SW data is 68 us. Because we parallelize this transmission with the much slower UART transmission, and because UART transmission is also sending at least the same amount of information as the LVDS, the LVDS transmission time will not add any burdens on the processing budget beyond the overhead of reconfiguring and activating the CBUFF session (this overhead is likely bigger than the time to transmit).
The total amount of data to be transmitted in a HW or SW packet must be greater than the minimum required by CBUFF, which is 64 bytes or 32 CBUFF Units (this is the definition CBUFF_MIN_TRANSFER_SIZE_CBUFF_UNITS in the CBUFF driver implementation). If this threshold condition is violated, the CBUFF driver will return an error during configuration and the demo will generate a fatal exception as a result. When HSI header is enabled, the total transfer size is ensured to be at least 256 bytes, which satisfies the minimum. If HSI header is disabled, for the HW session, this means that numAdcSamples * numRxChannels * 4 >= 64. Although mmwavelink allows minimum number of ADC samples to be 2, the demo is supported for numAdcSamples >= 64. So HSI header is not required to be enabled for HW only case. But if SW session is enabled, without the HSI header, the bytes in each packet will be 8 + Nd * 20. So for frames/sub-frames where Nd < 3, the demo will generate exception. Therefore HSI header must be enabled if SW is enabled, this is checked in the CLI command validation.
Implementation Notes
The LVDS implementation is mostly present in mmw_lvds_stream.h and mmw_lvds_stream.c with calls in mss_main.c. Additionally HSI clock initialization is done at first time sensor start using MmwDemo_mssSetHsiClk.
EDMA channel resources for CBUFF/LVDS are in the global resource file (mmw_res.h, see Hardware Resource Allocation) along with other EDMA resource allocation. The user data header and two user payloads are configured as three user buffers in the CBUFF driver. Hence SW allocation for EDMA provides for three sets of EDMA resources as seen in the SW part (swSessionEDMAChannelTable[.]) of MmwDemo_LVDSStream_EDMAInit. The maximum number of HW EDMA resources are needed for the data-format MMW_DEMO_LVDS_STREAM_CFG_DATAFMT_CP_ADC_CQ, which as seen in the corresponding slide in ti\drivers\cbuff\docs\CBUFF_Transfers.pptx is 12 channels (+ shadows) including the 1st special CBUFF EDMA event channel which CBUFF IP generates to the EDMA, hence the HW part (hwwSessionEDMAChannelTable[.]) of MmwDemo_LVDSStream_EDMAInit has 11 table entries.
Although the CBUFF driver is configured for two sessions (hw and sw), at any time only one can be active. So depending on the LVDS CLI configuration and whether advanced frame or not, there is logic to activate/deactivate HW and SW sessions as necessary.
The CBUFF session (HW/SW) configure-create and delete depends on whether or not re-configuration is required after the first time configuration.
For HW session, re-configuration is done during sub-frame switching to re-configure for the next sub-frame but when there is no advanced frame (number of sub-frames = 1), the HW configuration does not need to change so HW session does not need to be re-created.
For SW session, even though the user buffer start addresses and sizes of headers remains same, the number of detected objects which determines the sizes of some user buffers changes from one sub-frame/frame to another sub-frame/frame. Therefore SW session needs to be recreated every sub-frame/frame.
User may modify the application software to transmit different information than point-cloud in the SW data e.g radar cube data (output of range DPU). However the CBUFF also has a maximum link list entry size limit of 0x3FFF CBUFF units or 32766 bytes. This means it is the limit for each user buffer entry [there are maximum of 3 entries -1st used for user data header, 2nd for point-cloud and 3rd for point-cloud side information]. During session creation, if this limit is exceeded, the CBUFF will return an error (and demo will in turn generate an exception). A single physical buffer of say size 50000 bytes may be split across two user buffers by providing one user buffer with (address, size) = (start address, 25000) and 2nd user buffer with (address, size) = (start address + 25000, 25000), beyond this two (or three if user data header is also replaced) limit, the user will need to create and activate (and wait for completion) the SW session multiple times to accomplish the transmission.
The following figure shows a timing diagram for the LVDS streaming (the figure is not to scale as actual durations will vary based on configuration).
How to bypass CLI
Re-implement the file mmw_cli.c as follows:
MmwDemo_CLIInit should just create a task with input taskPriority. Let's say the task is called "MmwDemo_sensorConfig_task".
All other functions are not needed
Implement the MmwDemo_sensorConfig_task as follows:
Fill gMmwMCB.cfg.openCfg
Fill gMmwMCB.cfg.ctrlCfg
Add profiles and chirps using MMWave_addProfile and MMWave_addChirp functions
Call MmwDemo_CfgUpdate for every offset in Offsets for storing CLI configuration (MMWDEMO_xxx_OFFSET in mmw.h)
Fill gMmwMCB.dataPathObj.objDetCommonCfg.preStartCommonCfg
Call MmwDemo_openSensor
Call MmwDemo_startSensor (One can use helper function MmwDemo_isAllCfgInPendingState to know if all dynamic config was provided)
Hardware Resource Allocation
The Object Detection DPC needs to configure the DPUs hardware resources (HWA, EDMA). Even though the hardware resources currently are only required to be allocated for this one and only DPC in the system, the resource partitioning is shown to be in the ownership of the demo. This is to illustrate the general case of resource allocation across more than one DPCs and/or demo's own processing that is post-DPC processing. This partitioning can be seen in the mmw_res.h file. This file is passed as a compiler command line define
"--define=APP_RESOURCE_FILE="<ti/demo/xwr64xx/mmw/mmw_res.h>"
in mmw.mak when building the DPC sources as part of building the demo application and is referred in object detection DPC sources where needed as
#include APP_RESOURCE_FILE