[Audio/Video] An Analysis of the FFmpeg Filter Framework
FFmpeg filters are used much like GStreamer plugins: you create the filters you need, chain them in the desired order with avfilter_link(), and once avfilter_graph_config() has been called the graph is ready to use.
Commonly used filters include scale, trim, overlay, rotate, movie, and yadif. The scale filter resizes video, trim performs frame-accurate cutting, overlay composites one video on top of another, rotate rotates the picture, movie loads an external video source, and yadif deinterlaces.
1 Key Structures and APIs
AVFilterGraph: overall management of the filter system
Key fields
struct AVFilterGraph
{
    AVFilterContext **filters;
    unsigned nb_filters;
};
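A minimal sketch of how these two fields are typically used once a graph has been built (assuming graph points to an already-configured AVFilterGraph; error handling omitted):

#include <stdio.h>
#include <libavfilter/avfilter.h>

// Walk every filter instance the graph manages and print its name.
static void dump_graph_filters(const AVFilterGraph *graph)
{
    for (unsigned i = 0; i < graph->nb_filters; i++) {
        const AVFilterContext *fctx = graph->filters[i];
        printf("filter #%u: %s (instance \"%s\")\n", i, fctx->filter->name, fctx->name);
    }
}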
Full structure
// Overall management of the filter system
typedef struct AVFilterGraph {
    const AVClass *av_class;
    AVFilterContext **filters;
    unsigned nb_filters;

    char *scale_sws_opts;   ///< sws options to use for the auto-inserted scale filters
#if FF_API_LAVR_OPTS
    attribute_deprecated char *resample_lavr_opts;   ///< libavresample options to use for the auto-inserted resample filters
#endif

    /** Type of multithreading allowed for filters in this graph. A combination of
     *  AVFILTER_THREAD_* flags.
     *  May be set by the caller at any point, the setting will apply to all filters
     *  initialized after that. The default is allowing everything.
     *  When a filter in this graph is initialized, this field is combined using bit AND
     *  with AVFilterContext.thread_type to get the final mask used for determining allowed
     *  threading types. I.e. a threading type needs to be set in both to be allowed. */
    int thread_type;

    /** Maximum number of threads used by filters in this graph. May be set by the caller
     *  before adding any filters to the filtergraph. Zero (the default) means that the
     *  number of threads is determined automatically. */
    int nb_threads;

    /** Opaque object for libavfilter internal use. */
    AVFilterGraphInternal *internal;

    /** Opaque user data. May be set by the caller to an arbitrary value, e.g. to be used
     *  from callbacks like @ref AVFilterGraph.execute. Libavfilter will not touch this
     *  field in any way. */
    void *opaque;

    /** This callback may be set by the caller immediately after allocating the graph and
     *  before adding any filters to it, to provide a custom multithreading implementation.
     *  If set, filters with slice threading capability will call this callback to execute
     *  multiple jobs in parallel.
     *  If this field is left unset, libavfilter will use its internal implementation, which
     *  may or may not be multithreaded depending on the platform and build options. */
    avfilter_execute_func *execute;

    char *aresample_swr_opts;   ///< swr options to use for the auto-inserted aresample filters, Access ONLY through AVOptions

    /** Private fields
     *  The following fields are for internal use only.
     *  Their type, offset, number and semantic can change without notice. */
    AVFilterLink **sink_links;
    int sink_links_count;

    unsigned disable_auto_convert;
} AVFilterGraph;
AVFilter: defines the capabilities of the filter itself
Key fields
const char *name; // overlay
const AVFilterPad *inputs;
const AVFilterPad *outputs;
For example:
AVFilter ff_vf_overlay = {
    .name            = "overlay",
    .description     = NULL_IF_CONFIG_SMALL("Overlay a video source on top of the input."),
    .preinit         = overlay_framesync_preinit,
    .init            = init,
    .uninit          = uninit,
    .priv_size       = sizeof(OverlayContext),
    .priv_class      = &overlay_class,
    .query_formats   = query_formats,
    .activate        = activate,
    .process_command = process_command,
    .inputs          = avfilter_vf_overlay_inputs,
    .outputs         = avfilter_vf_overlay_outputs,
    .flags           = AVFILTER_FLAG_SUPPORT_TIMELINE_INTERNAL |
                       AVFILTER_FLAG_SLICE_THREADS,
};
Defines the capabilities of the filter itself: the pads it owns and its callback interfaces.
Full structure
typedef struct AVFilter {
    /** Filter name. Must be non-NULL and unique among filters. */
    const char *name;

    /** A description of the filter. May be NULL.
     *  You should use the NULL_IF_CONFIG_SMALL() macro to define it. */
    const char *description;

    /** List of inputs, terminated by a zeroed element.
     *  NULL if there are no (static) inputs. Instances of filters with
     *  AVFILTER_FLAG_DYNAMIC_INPUTS set may have more inputs than present in this list. */
    const AVFilterPad *inputs;

    /** List of outputs, terminated by a zeroed element.
     *  NULL if there are no (static) outputs. Instances of filters with
     *  AVFILTER_FLAG_DYNAMIC_OUTPUTS set may have more outputs than present in this list. */
    const AVFilterPad *outputs;

    /** A class for the private data, used to declare filter private AVOptions.
     *  This field is NULL for filters that do not declare any options.
     *  If this field is non-NULL, the first member of the filter private data must be a
     *  pointer to AVClass, which will be set by libavfilter generic code to this class. */
    const AVClass *priv_class;

    /** A combination of AVFILTER_FLAG_* */
    int flags;

    /*****************************************************************
     * All fields below this line are not part of the public API. They
     * may not be used outside of libavfilter and can be changed and
     * removed at will.
     * New public fields should be added right above.
     *****************************************************************/

    /** Filter pre-initialization function.
     *  This callback will be called immediately after the filter context is allocated,
     *  to allow allocating and initing sub-objects.
     *  If this callback is not NULL, the uninit callback will be called on allocation failure.
     *  @return 0 on success, AVERROR code on failure (but the code will be dropped and
     *          treated as ENOMEM by the calling code) */
    int (*preinit)(AVFilterContext *ctx);

    /** Filter initialization function.
     *  This callback will be called only once during the filter lifetime, after all the
     *  options have been set, but before links between filters are established and format
     *  negotiation is done.
     *  Basic filter initialization should be done here. Filters with dynamic inputs and/or
     *  outputs should create those inputs/outputs here based on provided options. No more
     *  changes to this filter's inputs/outputs can be done after this callback.
     *  This callback must not assume that the filter links exist or frame parameters are known.
     *  @ref AVFilter.uninit "uninit" is guaranteed to be called even if initialization fails,
     *  so this callback does not have to clean up on failure.
     *  @return 0 on success, a negative AVERROR on failure */
    int (*init)(AVFilterContext *ctx);

    /** Should be set instead of @ref AVFilter.init "init" by the filters that want to pass
     *  a dictionary of AVOptions to nested contexts that are allocated during init.
     *  On return, the options dict should be freed and replaced with one that contains all
     *  the options which could not be processed by this filter (or with NULL if all the
     *  options were processed).
     *  Otherwise the semantics is the same as for @ref AVFilter.init "init". */
    int (*init_dict)(AVFilterContext *ctx, AVDictionary **options);

    /** Filter uninitialization function.
     *  Called only once right before the filter is freed. Should deallocate any memory held
     *  by the filter, release any buffer references, etc. It does not need to deallocate the
     *  AVFilterContext.priv memory itself.
     *  This callback may be called even if @ref AVFilter.init "init" was not called or
     *  failed, so it must be prepared to handle such a situation. */
    void (*uninit)(AVFilterContext *ctx);

    /** Query formats supported by the filter on its inputs and outputs.
     *  This callback is called after the filter is initialized (so the inputs and outputs
     *  are fixed), shortly before the format negotiation. This callback may be called more
     *  than once.
     *  This callback must set AVFilterLink.out_formats on every input link and
     *  AVFilterLink.in_formats on every output link to a list of pixel/sample formats that
     *  the filter supports on that link. For audio links, this filter must also set
     *  @ref AVFilterLink.in_samplerates "in_samplerates" /
     *  @ref AVFilterLink.out_samplerates "out_samplerates" and
     *  @ref AVFilterLink.in_channel_layouts "in_channel_layouts" /
     *  @ref AVFilterLink.out_channel_layouts "out_channel_layouts" analogously.
     *  This callback may be NULL for filters with one input, in which case libavfilter
     *  assumes that it supports all input formats and preserves them on output.
     *  @return zero on success, a negative value corresponding to an AVERROR code otherwise */
    int (*query_formats)(AVFilterContext *);

    int priv_size;       ///< size of private data to allocate for the filter
    int flags_internal;  ///< Additional flags for avfilter internal use only.

    /** Used by the filter registration system. Must not be touched by any other code. */
    struct AVFilter *next;

    /** Make the filter instance process a command.
     *  @param cmd   the command to process, for handling simplicity all commands must be alphanumeric only
     *  @param arg   the argument for the command
     *  @param res   a buffer with size res_size where the filter(s) can return a response.
     *               This must not change when the command is not supported.
     *  @param flags if AVFILTER_CMD_FLAG_FAST is set and the command would be time consuming
     *               then a filter should treat it like an unsupported command
     *  @returns >=0 on success otherwise an error code.
     *           AVERROR(ENOSYS) on unsupported commands */
    int (*process_command)(AVFilterContext *, const char *cmd, const char *arg, char *res, int res_len, int flags);

    /** Filter initialization function, alternative to the init() callback. Args contains
     *  the user-supplied parameters, opaque is used for providing binary data. */
    int (*init_opaque)(AVFilterContext *ctx, void *opaque);

    /** Filter activation function.
     *  Called when any processing is needed from the filter, instead of any filter_frame
     *  and request_frame on pads.
     *  The function must examine inlinks and outlinks and perform a single step of
     *  processing. If there is nothing to do, the function must do nothing and not return
     *  an error. If more steps are or may be possible, it must use ff_filter_set_ready()
     *  to schedule another activation. */
    int (*activate)(AVFilterContext *ctx);
} AVFilter;
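A tiny sketch of how application code usually touches AVFilter directly: look a filter up by name and inspect its static properties (read-only; nothing here is specific to this article's example):

#include <stdio.h>
#include <libavfilter/avfilter.h>

// Look up the overlay filter definition and print a few of its static fields.
static void describe_overlay(void)
{
    const AVFilter *f = avfilter_get_by_name("overlay");
    if (!f) {
        printf("overlay filter not found\n");
        return;
    }
    printf("name:        %s\n", f->name);
    printf("description: %s\n", f->description ? f->description : "(none)");
    printf("has options: %s\n", f->priv_class ? "yes" : "no");
}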
AVFilterContext: a filter instance; manages the filter's connections to the outside world
Key fields
struct AVFilterContext
{
    const AVFilter *filter;
    char *name;
    AVFilterPad *input_pads;
    AVFilterLink **inputs;
    unsigned nb_inputs;
    AVFilterPad *output_pads;
    AVFilterLink **outputs;
    unsigned nb_outputs;
    struct AVFilterGraph *graph;   // the AVFilterGraph this filter belongs to
};
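Besides avfilter_graph_create_filter() (used later in this article), an AVFilterContext can also be created and initialized in two explicit steps. A sketch, assuming a valid graph pointer; the instance name "my_scale" and the option string are placeholders:

#include <libavfilter/avfilter.h>

// Two-step creation: allocate the context inside the graph, then initialize it
// with an option string (same syntax as on the command line).
static AVFilterContext *create_scale(AVFilterGraph *graph)
{
    const AVFilter *scale = avfilter_get_by_name("scale");
    AVFilterContext *ctx  = avfilter_graph_alloc_filter(graph, scale, "my_scale");
    if (!ctx)
        return NULL;
    if (avfilter_init_str(ctx, "w=640:h=360") < 0)
        return NULL;
    return ctx;
}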
Full structure
/** An instance of a filter */
struct AVFilterContext {
    const AVClass *av_class;     ///< needed for av_log() and filters common options

    const AVFilter *filter;      ///< the AVFilter of which this is an instance

    char *name;                  ///< name of this filter instance

    AVFilterPad *input_pads;     ///< array of input pads
    AVFilterLink **inputs;       ///< array of pointers to input links
    unsigned nb_inputs;          ///< number of input pads

    AVFilterPad *output_pads;    ///< array of output pads
    AVFilterLink **outputs;      ///< array of pointers to output links
    unsigned nb_outputs;         ///< number of output pads

    void *priv;                  ///< private data for use by the filter

    struct AVFilterGraph *graph; ///< filtergraph this filter belongs to

    /** Type of multithreading being allowed/used. A combination of AVFILTER_THREAD_* flags.
     *  May be set by the caller before initializing the filter to forbid some or all kinds
     *  of multithreading for this filter. The default is allowing everything.
     *  When the filter is initialized, this field is combined using bit AND with
     *  AVFilterGraph.thread_type to get the final mask used for determining allowed
     *  threading types. I.e. a threading type needs to be set in both to be allowed.
     *  After the filter is initialized, libavfilter sets this field to the threading type
     *  that is actually used (0 for no multithreading). */
    int thread_type;

    /** An opaque struct for libavfilter internal use. */
    AVFilterInternal *internal;

    struct AVFilterCommand *command_queue;

    char *enable_str;            ///< enable expression string
    void *enable;                ///< parsed expression (AVExpr*)
    double *var_values;          ///< variable values for the enable expression
    int is_disabled;             ///< the enabled state from the last expression evaluation

    /** For filters which will create hardware frames, sets the device the filter should
     *  create them in. All other filters will ignore this field: in particular, a filter
     *  which consumes or processes hardware frames will instead use the hw_frames_ctx field
     *  in AVFilterLink to carry the hardware context information. */
    AVBufferRef *hw_device_ctx;

    /** Max number of threads allowed in this filter instance.
     *  If <= 0, its value is ignored.
     *  Overrides global number of threads set per filter graph. */
    int nb_threads;

    /** Ready status of the filter.
     *  A non-0 value means that the filter needs activating;
     *  a higher value suggests a more urgent activation. */
    unsigned ready;

    /** Sets the number of extra hardware frames which the filter will allocate on its
     *  output links for use in following filters or by the caller.
     *  Some hardware filters require all frames that they will use for output to be defined
     *  in advance before filtering starts. For such filters, any hardware frame pools used
     *  for output must therefore be of fixed size. The extra frames set here are on top of
     *  any number that the filter needs internally in order to operate normally.
     *  This field must be set before the graph containing this filter is configured. */
    int extra_hw_frames;
};
AVFilterLink: the link between two filters
Key fields
struct AVFilterLink
{
    AVFilterContext *src;
    AVFilterPad *srcpad;
    AVFilterContext *dst;
    AVFilterPad *dstpad;
    struct AVFilterGraph *graph;
};
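Applications normally do not read AVFilterLink fields directly; the negotiated link properties are instead queried through the buffersink accessors. A small sketch (bufferSink_ctx is assumed to be the buffersink instance created later in this article):

#include <stdio.h>
#include <libavfilter/buffersink.h>
#include <libavutil/pixdesc.h>

// Print the properties negotiated on the link that feeds the buffersink.
static void print_sink_props(const AVFilterContext *bufferSink_ctx)
{
    int w = av_buffersink_get_w(bufferSink_ctx);
    int h = av_buffersink_get_h(bufferSink_ctx);
    enum AVPixelFormat fmt = (enum AVPixelFormat)av_buffersink_get_format(bufferSink_ctx);
    AVRational tb = av_buffersink_get_time_base(bufferSink_ctx);
    printf("output: %dx%d %s, time_base=%d/%d\n", w, h, av_get_pix_fmt_name(fmt), tb.num, tb.den);
}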
Full structure
/** A link between two filters. This contains pointers to the source and destination filters
 *  between which this link exists, and the indexes of the pads involved. In addition, this
 *  link also contains the parameters which have been negotiated and agreed upon between the
 *  filters, such as image dimensions, format, etc.
 *
 *  Applications must not normally access the link structure directly.
 *  Use the buffersrc and buffersink API instead.
 *  In the future, access to the header may be reserved for filters implementation. */
struct AVFilterLink {
    AVFilterContext *src;       ///< source filter
    AVFilterPad *srcpad;        ///< output pad on the source filter

    AVFilterContext *dst;       ///< dest filter
    AVFilterPad *dstpad;        ///< input pad on the dest filter

    enum AVMediaType type;      ///< filter media type

    /* These parameters apply only to video */
    int w;                      ///< agreed upon image width
    int h;                      ///< agreed upon image height
    AVRational sample_aspect_ratio;   ///< agreed upon sample aspect ratio
    /* These parameters apply only to audio */
    uint64_t channel_layout;    ///< channel layout of current buffer (see libavutil/channel_layout.h)
    int sample_rate;            ///< samples per second

    int format;                 ///< agreed upon media format

    /** Define the time base used by the PTS of the frames/samples which will pass through
     *  this link. During the configuration stage, each filter is supposed to change only
     *  the output timebase, while the timebase of the input link is assumed to be an
     *  unchangeable property. */
    AVRational time_base;

    /*****************************************************************
     * All fields below this line are not part of the public API. They
     * may not be used outside of libavfilter and can be changed and
     * removed at will.
     * New public fields should be added right above.
     *****************************************************************/

    /** Lists of formats and channel layouts supported by the input and output filters
     *  respectively. These lists are used for negotiating the format to actually be used,
     *  which will be loaded into the format and channel_layout members, above, when chosen. */
    AVFilterFormats *in_formats;
    AVFilterFormats *out_formats;

    /** Lists of channel layouts and sample rates used for automatic negotiation. */
    AVFilterFormats *in_samplerates;
    AVFilterFormats *out_samplerates;
    struct AVFilterChannelLayouts *in_channel_layouts;
    struct AVFilterChannelLayouts *out_channel_layouts;

    /** Audio only, the destination filter sets this to a non-zero value to request that
     *  buffers with the given number of samples should be sent to it. AVFilterPad.needs_fifo
     *  must also be set on the corresponding input pad.
     *  Last buffer before EOF will be padded with silence. */
    int request_samples;

    /** stage of the initialization of the link properties (dimensions, etc) */
    enum {
        AVLINK_UNINIT = 0,      ///< not started
        AVLINK_STARTINIT,       ///< started, but incomplete
        AVLINK_INIT             ///< complete
    } init_state;

    /** Graph the filter belongs to. */
    struct AVFilterGraph *graph;

    /** Current timestamp of the link, as defined by the most recent frame(s), in link
     *  time_base units. */
    int64_t current_pts;

    /** Current timestamp of the link, as defined by the most recent frame(s), in
     *  AV_TIME_BASE units. */
    int64_t current_pts_us;

    /** Index in the age array. */
    int age_index;

    /** Frame rate of the stream on the link, or 1/0 if unknown or variable; if left to 0/0,
     *  will be automatically copied from the first input of the source filter if it exists.
     *  Sources should set it to the best estimation of the real frame rate. If the source
     *  frame rate is unknown or variable, set this to 1/0. Filters should update it if
     *  necessary depending on their function. Sinks can use it to set a default output
     *  frame rate. It is similar to the r_frame_rate field in AVStream. */
    AVRational frame_rate;

    /** Buffer partially filled with samples to achieve a fixed/minimum size. */
    AVFrame *partial_buf;

    /** Size of the partial buffer to allocate. Must be between min_samples and max_samples. */
    int partial_buf_size;

    /** Minimum number of samples to filter at once. If filter_frame() is called with fewer
     *  samples, it will accumulate them in partial_buf. This field and the related ones must
     *  not be changed after filtering has started. If 0, all related fields are ignored. */
    int min_samples;

    /** Maximum number of samples to filter at once. If filter_frame() is called with more
     *  samples, it will split them. */
    int max_samples;

    /** Number of channels. */
    int channels;

    /** Link processing flags. */
    unsigned flags;

    /** Number of past frames sent through the link. */
    int64_t frame_count_in, frame_count_out;

    /** A pointer to a FFFramePool struct. */
    void *frame_pool;

    /** True if a frame is currently wanted on the output of this filter.
     *  Set when ff_request_frame() is called by the output,
     *  cleared when a frame is filtered. */
    int frame_wanted_out;

    /** For hwaccel pixel formats, this should be a reference to the AVHWFramesContext
     *  describing the frames. */
    AVBufferRef *hw_frames_ctx;

#ifndef FF_INTERNAL_FIELDS
    /** Internal structure members. The fields below this limit are internal for
     *  libavfilter's use and must in no way be accessed by applications. */
    char reserved[0xF000];
#else /* FF_INTERNAL_FIELDS */
    /** Queue of frames waiting to be filtered. */
    FFFrameQueue fifo;

    /** If set, the source filter can not generate a frame as is. The goal is to avoid
     *  repeatedly calling the request_frame() method on the same link. */
    int frame_blocked_in;

    /** Link input status. If not zero, all attempts of filter_frame will fail with the
     *  corresponding code. */
    int status_in;

    /** Timestamp of the input status change. */
    int64_t status_in_pts;

    /** Link output status. If not zero, all attempts of request_frame will fail with the
     *  corresponding code. */
    int status_out;
#endif /* FF_INTERNAL_FIELDS */
};
AVFilterPad: defines a filter's input/output interface
Key fields
struct AVFilterPad
{
    const char *name;
    AVFrame *(*get_video_buffer)(AVFilterLink *link, int w, int h);
    AVFrame *(*get_audio_buffer)(AVFilterLink *link, int nb_samples);
    int (*filter_frame)(AVFilterLink *link, AVFrame *frame);
    int (*request_frame)(AVFilterLink *link);
};
Full structure
/** A filter pad used for either input or output. */
struct AVFilterPad {
    /** Pad name. The name is unique among inputs and among outputs, but an input may have
     *  the same name as an output. This may be NULL if this pad has no need to ever be
     *  referenced by name. */
    const char *name;

    /** AVFilterPad type. */
    enum AVMediaType type;

    /** Callback function to get a video buffer. If NULL, the filter system will use
     *  ff_default_get_video_buffer().
     *  Input video pads only. */
    AVFrame *(*get_video_buffer)(AVFilterLink *link, int w, int h);

    /** Callback function to get an audio buffer. If NULL, the filter system will use
     *  ff_default_get_audio_buffer().
     *  Input audio pads only. */
    AVFrame *(*get_audio_buffer)(AVFilterLink *link, int nb_samples);

    /** Filtering callback. This is where a filter receives a frame with audio/video data
     *  and should do its processing.
     *  Input pads only.
     *  @return >= 0 on success, a negative AVERROR on error. This function must ensure
     *  that frame is properly unreferenced on error if it hasn't been passed on to another
     *  filter. */
    int (*filter_frame)(AVFilterLink *link, AVFrame *frame);

    /** Frame poll callback. This returns the number of immediately available samples. It
     *  should return a positive value if the next request_frame() is guaranteed to return
     *  one frame (with no delay).
     *  Defaults to just calling the source poll_frame() method.
     *  Output pads only. */
    int (*poll_frame)(AVFilterLink *link);

    /** Frame request callback. A call to this should result in some progress towards
     *  producing output over the given link. This should return zero on success, and
     *  another value on error.
     *  Output pads only. */
    int (*request_frame)(AVFilterLink *link);

    /** Link configuration callback.
     *  For output pads, this should set the link properties such as width/height. This
     *  should NOT set the format property - that is negotiated between filters by the
     *  filter system using the query_formats() callback before this function is called.
     *  For input pads, this should check the properties of the link, and update the
     *  filter's internal state as necessary.
     *  For both input and output filters, this should return zero on success, and another
     *  value on error. */
    int (*config_props)(AVFilterLink *link);

    /** The filter expects a fifo to be inserted on its input link, typically because it
     *  has a delay.
     *  Input pads only. */
    int needs_fifo;

    /** The filter expects writable frames from its input link, duplicating data buffers
     *  if needed.
     *  Input pads only. */
    int needs_writable;
};
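To make the role of AVFilterPad concrete, here is a sketch of how a simple one-input, one-output video filter might declare its static pads. This is purely illustrative: the full AVFilterPad definition lives in libavfilter's private header (internal.h), so real pad tables can only be written inside libavfilter itself, and filter_frame_cb is a hypothetical callback name:

// Illustrative pad tables for a hypothetical video filter (only compilable inside libavfilter).
static int filter_frame_cb(AVFilterLink *inlink, AVFrame *frame);   // assumed to be defined elsewhere

static const AVFilterPad my_filter_inputs[] = {
    {
        .name         = "default",
        .type         = AVMEDIA_TYPE_VIDEO,
        .filter_frame = filter_frame_cb,   // called for every frame arriving on this pad
    },
    { NULL }   // the pad list is terminated by a zeroed element
};

static const AVFilterPad my_filter_outputs[] = {
    {
        .name = "default",
        .type = AVMEDIA_TYPE_VIDEO,
    },
    { NULL }
};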
AVFilterInOut: a linked list of the filter chain's open inputs/outputs
/** A linked-list of the inputs/outputs of the filter chain.
 *
 *  This is mainly useful for avfilter_graph_parse() / avfilter_graph_parse2(), where it is
 *  used to communicate open (unlinked) inputs and outputs from and to the caller.
 *  This struct specifies, per each not connected pad contained in the graph, the filter
 *  context and the pad index required for establishing a link. */
typedef struct AVFilterInOut {
    /** unique name for this input/output in the list */
    char *name;

    /** filter context associated to this input/output */
    AVFilterContext *filter_ctx;

    /** index of the filt_ctx pad to use for linking */
    int pad_idx;

    /** next input/input in the list, NULL if this is the last */
    struct AVFilterInOut *next;
} AVFilterInOut;
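AVFilterInOut is mostly used together with avfilter_graph_parse_ptr(), which builds the body of a graph from a textual description and wires it to already-created endpoints. A sketch that closely follows FFmpeg's official filtering examples (buffersrc_ctx/buffersink_ctx are assumed to exist; filters_descr is e.g. "crop=iw:ih/2,vflip"):

#include <libavfilter/avfilter.h>
#include <libavutil/error.h>
#include <libavutil/mem.h>

// Parse a filtergraph string and connect it between an existing buffer source and buffersink.
static int parse_graph(AVFilterGraph *graph, AVFilterContext *buffersrc_ctx,
                       AVFilterContext *buffersink_ctx, const char *filters_descr)
{
    AVFilterInOut *outputs = avfilter_inout_alloc();   // the open output of the buffer source
    AVFilterInOut *inputs  = avfilter_inout_alloc();   // the open input of the buffersink
    int ret;

    if (!outputs || !inputs) {
        ret = AVERROR(ENOMEM);
        goto end;
    }
    outputs->name       = av_strdup("in");
    outputs->filter_ctx = buffersrc_ctx;
    outputs->pad_idx    = 0;
    outputs->next       = NULL;

    inputs->name        = av_strdup("out");
    inputs->filter_ctx  = buffersink_ctx;
    inputs->pad_idx     = 0;
    inputs->next        = NULL;

    ret = avfilter_graph_parse_ptr(graph, filters_descr, &inputs, &outputs, NULL);
end:
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    return ret;
}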
- The AVFilter structure is defined in the libavfilter module. Each AVFilter is a node with a self-contained function: the scale filter resizes images, the overlay filter composites one image on top of another, and so on.
- Two special filters deserve particular attention: buffer and buffersink.
- The buffer filter is the source of the filter graph; raw frames are fed into the graph through this node.
- The buffersink filter is the output node of the filter graph; processed frames are pulled out of the graph from this node.
2 Function Usage
// Look up one of FFmpeg's built-in filters by name. Before FFmpeg 4.0 this requires a prior
// call to avfilter_register_all(); newer versions register filters automatically.
const AVFilter *avfilter_get_by_name(const char *name);
// Push a frame of input data into the buffer source filter
int av_buffersrc_add_frame(AVFilterContext *ctx, AVFrame *frame);
// Pull a processed frame from the buffersink filter
int av_buffersink_get_frame(AVFilterContext *ctx, AVFrame *frame);
// Allocate a filter graph
AVFilterGraph *avfilter_graph_alloc(void);
// Create a filter instance (AVFilterContext) and add it to an AVFilterGraph
int avfilter_graph_create_filter(AVFilterContext **filt_ctx, const AVFilter *filt,
                                 const char *name, const char *args, void *opaque,
                                 AVFilterGraph *graph_ctx);
// Link two filter nodes together
int avfilter_link(AVFilterContext *src, unsigned srcpad,
                  AVFilterContext *dst, unsigned dstpad);
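Putting these functions together, the pull side of the processing loop usually looks like the sketch below. The consume callback is a placeholder for whatever the application does with filtered frames; the key point is that AVERROR(EAGAIN) and AVERROR_EOF are not real errors:

#include <libavfilter/buffersink.h>
#include <libavutil/error.h>
#include <libavutil/frame.h>

// Drain every frame currently available from the buffersink.
// Returns 0 when the graph simply needs more input (EAGAIN) or has been fully flushed (EOF),
// and a negative AVERROR on real failures.
static int drain_sink(AVFilterContext *sink_ctx, AVFrame *filt_frame,
                      int (*consume)(const AVFrame *frame))
{
    for (;;) {
        int ret = av_buffersink_get_frame(sink_ctx, filt_frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;                   // push more input, or stop at end of stream
        if (ret < 0)
            return ret;                 // genuine error
        consume(filt_frame);
        av_frame_unref(filt_frame);     // release the reference the sink handed us
    }
}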
3 The Main AVFilter Workflow
Before processing audio/video data with AVFilter it helps to sketch the processing pipeline. The example below, taken from the official FFmpeg filter documentation, is used for illustration.
                [main]
input --> split ---------------------> overlay --> output
            |                             ^
            |[tmp]                  [flip]|
            +-----> crop --> vflip -------+
The processing flow is shown above. The split filter first divides the input into two streams (main and tmp), which are then processed separately. The tmp stream is cropped by the crop filter and then flipped vertically by the vflip filter; the result is labeled flip. The main stream and the flip stream are then fed into the overlay filter and composited. Here, input is the buffer source filter mentioned above and output is the buffersink filter. Every node in the diagram is an AVFilterContext, every edge is an AVFilterLink, and all of it is managed by a single AVFilterGraph.
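The same pipeline can also be written as a single filtergraph description; this is the string form used by the official documentation for this example, and it could be handed to avfilter_graph_parse_ptr() instead of building every node by hand as done below:

// Textual form of the graph above. [main]/[tmp]/[flip] are link labels;
// overlay's H refers to the height of its first (main) input.
static const char *filters_descr =
    "split [main][tmp]; "
    "[tmp] crop=iw:ih/2:0:0, vflip [flip]; "
    "[main][flip] overlay=0:H/2";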
Implementation steps
Reading the input file
- Prepare a YUV file and place it in the build directory.
- Open the input file and set the resolution parameters (fopen_s is MSVC-specific; on other platforms use fopen()).
FILE* inFile = NULL;
const char* inFileName = "music.yuv";
fopen_s(&inFile, inFileName, "rb");
if (!inFile) {
    printf("Fail to open file\n");
    return -1;
}
int in_width = 1920;
int in_height = 1080;
- Open the output file.
FILE* outFile = NULL;
const char* outFileName = "out.yuv";
fopen_s(&outFile, outFileName, "wb");
if (!outFile) {
    printf("Fail to create file for output\n");
    return -1;
}
Configuring the filters
- First create a filter graph (AVFilterGraph), which manages the whole filter system.
AVFilterGraph* filter_graph = avfilter_graph_alloc();
if (!filter_graph) {
    printf("Fail to create filter graph!\n");
    return -1;
}
- Configure the buffer filter as the input source. It needs the resolution, pixel format, time base, and pixel aspect ratio (usually 1/1).
char args[512];
sprintf(args,
        "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
        in_width, in_height, AV_PIX_FMT_YUV420P,
        1, 25, 1, 1);
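The numeric pix_fmt value works, but the buffer filter also accepts pixel-format names, which makes the option string easier to read; a variant with the same effect (same variables as above, av_get_pix_fmt_name() comes from libavutil/pixdesc.h):

#include <libavutil/pixdesc.h>

// With the values above this expands to
// "video_size=1920x1080:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1".
snprintf(args, sizeof(args),
         "video_size=%dx%d:pix_fmt=%s:time_base=%d/%d:pixel_aspect=%d/%d",
         in_width, in_height, av_get_pix_fmt_name(AV_PIX_FMT_YUV420P),
         1, 25, 1, 1);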
- For every filter, first look it up with avfilter_get_by_name(), then create an instance of it in the graph with avfilter_graph_create_filter().
const AVFilter* bufferSrc = avfilter_get_by_name("buffer"); // input source of the AVFilterGraph
AVFilterContext* bufferSrc_ctx;
ret = avfilter_graph_create_filter(&bufferSrc_ctx, bufferSrc, "in", args, NULL, filter_graph);
if (ret < 0) {
    printf("Fail to create filter bufferSrc\n");
    return -1;
}
- Configure the buffersink filter as the output.
- The output filter is configured here through the AVBufferSinkParams structure.
- The pixel format is given as an array; the most suitable format in it will be chosen.
AVBufferSinkParams *bufferSink_params;
enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE };
bufferSink_params = av_buffersink_params_alloc();   // must be allocated before use
bufferSink_params->pixel_fmts = pix_fmts;
- As before, look the filter up first, then create it in the graph with this configuration.
const AVFilter* bufferSink = avfilter_get_by_name("buffersink");
AVFilterContext* bufferSink_ctx;
ret = avfilter_graph_create_filter(&bufferSink_ctx, bufferSink, "out", NULL,
                                   bufferSink_params, filter_graph);
if (ret < 0) {
    printf("Fail to create filter sink filter\n");
    return -1;
}
- Configure the split filter, which turns one video stream into several output streams.
- Here the stream is split into two, for the later overlay step; the split filter's outputs option sets how many output streams to produce.
const AVFilter *splitFilter = avfilter_get_by_name("split");
AVFilterContext *splitFilter_ctx;
ret = avfilter_graph_create_filter(&splitFilter_ctx, splitFilter, "split", "outputs=2",
                                   NULL, filter_graph);
if (ret < 0) {
    printf("Fail to create split filter\n");
    return -1;
}
- Configure the crop filter, which crops the video. out_w and out_h are the output width and height, x and y are the top-left corner of the crop region, and in_w/in_h (iw/ih) refer to the input width and height. Options are separated by ':'.
const AVFilter *cropFilter = avfilter_get_by_name("crop");
AVFilterContext *cropFilter_ctx;
ret = avfilter_graph_create_filter(&cropFilter_ctx, cropFilter, "crop",
                                   "out_w=iw:out_h=ih/2:x=0:y=0", NULL, filter_graph);
if (ret < 0) {
    printf("Fail to create crop filter\n");
    return -1;
}
- Configure the vflip filter, which flips the video vertically.
const AVFilter *vflipFilter = avfilter_get_by_name("vflip");
AVFilterContext *vflipFilter_ctx;
ret = avfilter_graph_create_filter(&vflipFilter_ctx, vflipFilter, "vflip", NULL, NULL, filter_graph);
if (ret < 0) {
    printf("Fail to create vflip filter\n");
    return -1;
}
- Configure the overlay filter, which composites two video streams. main_w (W) and main_h (H) are the width and height of the main (bottom) video; x and y set where the overlaid video is placed.
const AVFilter *overlayFilter = avfilter_get_by_name("overlay");
AVFilterContext *overlayFilter_ctx;
ret = avfilter_graph_create_filter(&overlayFilter_ctx, overlayFilter, "overlay",
                                   "x=0:y=H/2", NULL, filter_graph);
if (ret < 0) {
    printf("Fail to create overlay filter\n");
    return -1;
}
Linking the filters
- Once all the filters have been created, they must be linked together.
- The link order is:

buffer --> split --> [video_0] ------------------> overlay --> buffersink
             |                                        ^
             +-----> [video_1] --> crop --> vflip ----+

- Link them up in that order.
// src filter to split filter
ret = avfilter_link(bufferSrc_ctx, 0, splitFilter_ctx, 0);
if (ret != 0) {
    printf("Fail to link src filter and split filter\n");
    return -1;
}
// split filter's first pad to overlay filter's main pad
ret = avfilter_link(splitFilter_ctx, 0, overlayFilter_ctx, 0);
if (ret != 0) {
    printf("Fail to link split filter and overlay filter main pad\n");
    return -1;
}
// split filter's second pad to crop filter
ret = avfilter_link(splitFilter_ctx, 1, cropFilter_ctx, 0);
if (ret != 0) {
    printf("Fail to link split filter's second pad and crop filter\n");
    return -1;
}
// crop filter to vflip filter
ret = avfilter_link(cropFilter_ctx, 0, vflipFilter_ctx, 0);
if (ret != 0) {
    printf("Fail to link crop filter and vflip filter\n");
    return -1;
}
// vflip filter to overlay filter's second pad
ret = avfilter_link(vflipFilter_ctx, 0, overlayFilter_ctx, 1);
if (ret != 0) {
    printf("Fail to link vflip filter and overlay filter's second pad\n");
    return -1;
}
// overlay filter to sink filter
ret = avfilter_link(overlayFilter_ctx, 0, bufferSink_ctx, 0);
if (ret != 0) {
    printf("Fail to link overlay filter and sink filter\n");
    return -1;
}
Configuring the filter graph
- After linking, the graph must be configured with avfilter_graph_config() to produce a complete, working filter system.
// check filter graph
ret = avfilter_graph_config(filter_graph, NULL);
if (ret < 0) {
    printf("Fail in filter graph\n");
    return -1;
}
- The graph layout can be dumped to a file for inspection.
char *graph_str = avfilter_graph_dump(filter_graph, NULL);
FILE* graphFile = NULL;
fopen_s(&graphFile, "graphFile.txt", "w");   // dump the filtergraph layout to a file
if (graphFile) {
    fprintf(graphFile, "%s", graph_str);
    fclose(graphFile);
}
av_free(graph_str);
Producing the output file
- Allocate one frame's worth of buffer for the input and for the output.
AVFrame *frame_in = av_frame_alloc();
unsigned char *frame_buffer_in = (unsigned char *)av_malloc(
    av_image_get_buffer_size(AV_PIX_FMT_YUV420P, in_width, in_height, 1));
av_image_fill_arrays(frame_in->data, frame_in->linesize, frame_buffer_in,
                     AV_PIX_FMT_YUV420P, in_width, in_height, 1);

AVFrame *frame_out = av_frame_alloc();
unsigned char *frame_buffer_out = (unsigned char *)av_malloc(
    av_image_get_buffer_size(AV_PIX_FMT_YUV420P, in_width, in_height, 1));
av_image_fill_arrays(frame_out->data, frame_out->linesize, frame_buffer_out,
                     AV_PIX_FMT_YUV420P, in_width, in_height, 1);
- Set the frame parameters.
frame_in->width = in_width;
frame_in->height = in_height;
frame_in->format = AV_PIX_FMT_YUV420P;
- Read the YUV input file.
// Read one YUV420P frame: Y + U + V = 1 + 1/4 + 1/4 = 3/2 of width*height bytes
if (fread(frame_buffer_in, 1, in_width*in_height * 3 / 2, inFile) != in_width*in_height * 3 / 2) {
    break;
}
// input Y, U, V plane pointers
frame_in->data[0] = frame_buffer_in;
frame_in->data[1] = frame_buffer_in + in_width*in_height;          // U starts after 1 full luma plane
frame_in->data[2] = frame_buffer_in + in_width*in_height * 5 / 4;  // V starts after 1 + 1/4 planes
- Push the frame into the graph's buffer source. The frame does not need to be freed manually here; the filter system manages that memory.
if (av_buffersrc_add_frame(bufferSrc_ctx, frame_in) < 0) {  // hand the frame to the buffer source; no manual free needed
    printf("Error while add frame.\n");
    break;
}
- Pull the processed data from the graph's buffersink into frame_out.
- This frame must be released manually afterwards with av_frame_unref().
/* pull filtered pictures from the filtergraph */
ret = av_buffersink_get_frame(bufferSink_ctx, frame_out);  // copy the buffersink output into frame_out
if (ret < 0)
    break;
- Write the output YUV file.
for (int i = 0; i < frame_out->height; i++) {
    fwrite(frame_out->data[0] + frame_out->linesize[0] * i, 1, frame_out->width, outFile);
}
for (int i = 0; i < frame_out->height / 2; i++) {
    fwrite(frame_out->data[1] + frame_out->linesize[1] * i, 1, frame_out->width / 2, outFile);
}
for (int i = 0; i < frame_out->height / 2; i++) {
    fwrite(frame_out->data[2] + frame_out->linesize[2] * i, 1, frame_out->width / 2, outFile);
}
- Release the output frame's memory.
av_frame_unref(frame_out); // must be released manually
Cleanup
- At the end, close the files and free the AVFrame structures.
- Freeing the filter graph also frees the memory of all filter contexts that belong to it.
fclose(inFile);
fclose(outFile);

av_frame_free(&frame_in);
av_frame_free(&frame_out);
avfilter_graph_free(&filter_graph); // also frees the AVFilterContexts created in the graph
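One detail the example skips: frames can still be buffered inside the graph when the input file ends. To flush them, signal EOF on the buffer source and drain the sink once more before the cleanup above; a sketch using the same variables:

// Signal end of stream to the source, then drain whatever the graph still holds.
av_buffersrc_add_frame(bufferSrc_ctx, NULL);   // a NULL frame marks EOF
while (av_buffersink_get_frame(bufferSink_ctx, frame_out) >= 0) {
    // ... write frame_out exactly as in the loop above ...
    av_frame_unref(frame_out);
}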
Complete code
#include <stdio.h>

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>
#include <libavutil/imgutils.h>

int main(int argc, char** argv)
{
    int ret = 0;

    // input yuv
    FILE* inFile = NULL;
    const char* inFileName = "music.yuv";
    fopen_s(&inFile, inFileName, "rb");
    if (!inFile) {
        printf("Fail to open file\n");
        return -1;
    }
    int in_width = 1920;
    int in_height = 1080;

    // output yuv
    FILE* outFile = NULL;
    const char* outFileName = "out.yuv";
    fopen_s(&outFile, outFileName, "wb");
    if (!outFile) {
        printf("Fail to create file for output\n");
        return -1;
    }

    // avfilter_register_all();   // only needed before FFmpeg 4.0

    AVFilterGraph* filter_graph = avfilter_graph_alloc();
    if (!filter_graph) {
        printf("Fail to create filter graph!\n");
        return -1;
    }

    // source filter
    char args[512];
    sprintf(args,
            "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
            in_width, in_height, AV_PIX_FMT_YUV420P,
            1, 25, 1, 1);
    const AVFilter* bufferSrc = avfilter_get_by_name("buffer");   // input source of the AVFilterGraph
    AVFilterContext* bufferSrc_ctx;
    ret = avfilter_graph_create_filter(&bufferSrc_ctx, bufferSrc, "in", args, NULL, filter_graph);
    if (ret < 0) {
        printf("Fail to create filter bufferSrc\n");
        return -1;
    }

    // sink filter
    AVBufferSinkParams *bufferSink_params;
    AVFilterContext* bufferSink_ctx;
    const AVFilter* bufferSink = avfilter_get_by_name("buffersink");
    enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE };
    bufferSink_params = av_buffersink_params_alloc();
    bufferSink_params->pixel_fmts = pix_fmts;
    ret = avfilter_graph_create_filter(&bufferSink_ctx, bufferSink, "out", NULL,
                                       bufferSink_params, filter_graph);
    if (ret < 0) {
        printf("Fail to create filter sink filter\n");
        return -1;
    }

    // split filter
    const AVFilter *splitFilter = avfilter_get_by_name("split");
    AVFilterContext *splitFilter_ctx;
    ret = avfilter_graph_create_filter(&splitFilter_ctx, splitFilter, "split", "outputs=2",
                                       NULL, filter_graph);
    if (ret < 0) {
        printf("Fail to create split filter\n");
        return -1;
    }

    // crop filter
    const AVFilter *cropFilter = avfilter_get_by_name("crop");
    AVFilterContext *cropFilter_ctx;
    ret = avfilter_graph_create_filter(&cropFilter_ctx, cropFilter, "crop",
                                       "out_w=iw:out_h=ih/2:x=0:y=0", NULL, filter_graph);
    if (ret < 0) {
        printf("Fail to create crop filter\n");
        return -1;
    }

    // vflip filter
    const AVFilter *vflipFilter = avfilter_get_by_name("vflip");
    AVFilterContext *vflipFilter_ctx;
    ret = avfilter_graph_create_filter(&vflipFilter_ctx, vflipFilter, "vflip", NULL, NULL, filter_graph);
    if (ret < 0) {
        printf("Fail to create vflip filter\n");
        return -1;
    }

    // overlay filter
    const AVFilter *overlayFilter = avfilter_get_by_name("overlay");
    AVFilterContext *overlayFilter_ctx;
    ret = avfilter_graph_create_filter(&overlayFilter_ctx, overlayFilter, "overlay",
                                       "x=0:y=H/2", NULL, filter_graph);
    if (ret < 0) {
        printf("Fail to create overlay filter\n");
        return -1;
    }

    // src filter to split filter
    ret = avfilter_link(bufferSrc_ctx, 0, splitFilter_ctx, 0);
    if (ret != 0) {
        printf("Fail to link src filter and split filter\n");
        return -1;
    }
    // split filter's first pad to overlay filter's main pad
    ret = avfilter_link(splitFilter_ctx, 0, overlayFilter_ctx, 0);
    if (ret != 0) {
        printf("Fail to link split filter and overlay filter main pad\n");
        return -1;
    }
    // split filter's second pad to crop filter
    ret = avfilter_link(splitFilter_ctx, 1, cropFilter_ctx, 0);
    if (ret != 0) {
        printf("Fail to link split filter's second pad and crop filter\n");
        return -1;
    }
    // crop filter to vflip filter
    ret = avfilter_link(cropFilter_ctx, 0, vflipFilter_ctx, 0);
    if (ret != 0) {
        printf("Fail to link crop filter and vflip filter\n");
        return -1;
    }
    // vflip filter to overlay filter's second pad
    ret = avfilter_link(vflipFilter_ctx, 0, overlayFilter_ctx, 1);
    if (ret != 0) {
        printf("Fail to link vflip filter and overlay filter's second pad\n");
        return -1;
    }
    // overlay filter to sink filter
    ret = avfilter_link(overlayFilter_ctx, 0, bufferSink_ctx, 0);
    if (ret != 0) {
        printf("Fail to link overlay filter and sink filter\n");
        return -1;
    }

    // check filter graph
    ret = avfilter_graph_config(filter_graph, NULL);
    if (ret < 0) {
        printf("Fail in filter graph\n");
        return -1;
    }

    // dump the filtergraph layout to a file
    char *graph_str = avfilter_graph_dump(filter_graph, NULL);
    FILE* graphFile = NULL;
    fopen_s(&graphFile, "graphFile.txt", "w");
    if (graphFile) {
        fprintf(graphFile, "%s", graph_str);
        fclose(graphFile);
    }
    av_free(graph_str);

    AVFrame *frame_in = av_frame_alloc();
    unsigned char *frame_buffer_in = (unsigned char *)av_malloc(
        av_image_get_buffer_size(AV_PIX_FMT_YUV420P, in_width, in_height, 1));
    av_image_fill_arrays(frame_in->data, frame_in->linesize, frame_buffer_in,
                         AV_PIX_FMT_YUV420P, in_width, in_height, 1);

    AVFrame *frame_out = av_frame_alloc();
    unsigned char *frame_buffer_out = (unsigned char *)av_malloc(
        av_image_get_buffer_size(AV_PIX_FMT_YUV420P, in_width, in_height, 1));
    av_image_fill_arrays(frame_out->data, frame_out->linesize, frame_buffer_out,
                         AV_PIX_FMT_YUV420P, in_width, in_height, 1);

    frame_in->width = in_width;
    frame_in->height = in_height;
    frame_in->format = AV_PIX_FMT_YUV420P;

    uint32_t frame_count = 0;
    while (1) {
        // read one YUV420P frame: Y + U + V = 1 + 1/4 + 1/4 = 3/2 of width*height bytes
        if (fread(frame_buffer_in, 1, in_width*in_height * 3 / 2, inFile) != in_width*in_height * 3 / 2) {
            break;
        }
        // input Y, U, V plane pointers
        frame_in->data[0] = frame_buffer_in;
        frame_in->data[1] = frame_buffer_in + in_width*in_height;          // U: after 1 luma plane
        frame_in->data[2] = frame_buffer_in + in_width*in_height * 5 / 4;  // V: after 1 + 1/4 planes

        if (av_buffersrc_add_frame(bufferSrc_ctx, frame_in) < 0) {  // push the frame into the buffer source; no manual free needed
            printf("Error while add frame.\n");
            break;
        }

        // the graph does its processing internally
        /* pull filtered pictures from the filtergraph */
        ret = av_buffersink_get_frame(bufferSink_ctx, frame_out);   // copy the buffersink output into frame_out
        if (ret < 0)
            break;

        // output Y, U, V
        if (frame_out->format == AV_PIX_FMT_YUV420P) {
            for (int i = 0; i < frame_out->height; i++) {
                fwrite(frame_out->data[0] + frame_out->linesize[0] * i, 1, frame_out->width, outFile);
            }
            for (int i = 0; i < frame_out->height / 2; i++) {
                fwrite(frame_out->data[1] + frame_out->linesize[1] * i, 1, frame_out->width / 2, outFile);
            }
            for (int i = 0; i < frame_out->height / 2; i++) {
                fwrite(frame_out->data[2] + frame_out->linesize[2] * i, 1, frame_out->width / 2, outFile);
            }
        }

        ++frame_count;
        if (frame_count % 25 == 0)
            printf("Process %d frame!\n", frame_count);

        av_frame_unref(frame_out);   // must be released manually
    }

    fclose(inFile);
    fclose(outFile);

    av_frame_free(&frame_in);
    av_frame_free(&frame_out);
    avfilter_graph_free(&filter_graph);   // also frees the AVFilterContexts created in the graph

    return 0;
}
More resources: https://github.com/0voice